PR jiahuif: upgrade sigs.k8s.io/structured-merge-diff/v4 to v4.2.1
Result FAILURE
Tests 1 failed / 4059 succeeded
Started 2022-01-14 18:34
Elapsed 35m42s
Revision 31205dc7d9b271ede68ef90d09f416588ea3afdd
Refs 107565

Test Failures


k8s.io/kubernetes/test/integration/client TestExtractModifyApply_ForceOwnership 4.89s

go test -v k8s.io/kubernetes/test/integration/client -run TestExtractModifyApply_ForceOwnership$
=== RUN   TestExtractModifyApply_ForceOwnership
E0114 18:57:42.274028  118439 controller.go:201] Unable to remove endpoints from kubernetes service: Get "https://127.0.0.1:38897/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:38897: connect: connection refused
    testserver.go:380: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
I0114 18:57:42.932671  118439 serving.go:342] Generated self-signed cert (/tmp/kubernetes-kube-apiserver2557836671/apiserver.crt, /tmp/kubernetes-kube-apiserver2557836671/apiserver.key)
I0114 18:57:42.932687  118439 server.go:546] external host was not specified, using 127.0.0.1
W0114 18:57:42.932698  118439 authentication.go:523] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
    testserver.go:215: runtime-config=map[api/all:true]
    testserver.go:216: Starting kube-apiserver on port 35087...
W0114 18:57:43.822522  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.822544  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.822561  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.822861  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823489  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823528  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823704  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823734  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823769  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.823811  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.824066  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.824275  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.824365  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0114 18:57:43.824385  118439 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0114 18:57:43.824394  118439 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0114 18:57:43.824474  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:43.824491  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0114 18:57:43.825914  118439 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0114 18:57:43.825947  118439 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0114 18:57:43.859206  118439 genericapiserver.go:538] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
W0114 18:57:43.859325  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0114 18:57:43.860140  118439 instance.go:274] Using reconciler: lease
W0114 18:57:44.216457  118439 genericapiserver.go:538] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.219017  118439 genericapiserver.go:538] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.243022  118439 genericapiserver.go:538] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.245397  118439 genericapiserver.go:538] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.253769  118439 genericapiserver.go:538] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.270311  118439 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.270340  118439 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0114 18:57:44.272662  118439 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.272684  118439 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0114 18:57:44.295993  118439 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
W0114 18:57:44.296025  118439 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
W0114 18:57:44.299088  118439 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0114 18:57:44.304742  118439 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0114 18:57:44.304758  118439 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0114 18:57:44.306464  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0114 18:57:44.338415  118439 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
W0114 18:57:44.338892  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
    testserver.go:235: Waiting for /healthz to be ok...
I0114 18:57:45.828059  118439 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/tmp/kubernetes-kube-apiserver2557836671/proxy-ca.crt"
I0114 18:57:45.828075  118439 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2557836671/client-ca.crt"
I0114 18:57:45.828320  118439 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/kubernetes-kube-apiserver2557836671/apiserver.crt::/tmp/kubernetes-kube-apiserver2557836671/apiserver.key"
I0114 18:57:45.828699  118439 secure_serving.go:201] Serving securely on 127.0.0.1:35087
I0114 18:57:45.828778  118439 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 18:57:45.828849  118439 autoregister_controller.go:141] Starting autoregister controller
I0114 18:57:45.828860  118439 cache.go:32] Waiting for caches to sync for autoregister controller
I0114 18:57:45.828871  118439 apf_controller.go:317] Starting API Priority and Fairness config controller
I0114 18:57:45.828902  118439 crdregistration_controller.go:111] Starting crd-autoregister controller
I0114 18:57:45.828910  118439 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0114 18:57:45.828849  118439 available_controller.go:491] Starting AvailableConditionController
I0114 18:57:45.828941  118439 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
W0114 18:57:45.829249  118439 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0114 18:57:45.828837  118439 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0114 18:57:45.829286  118439 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0114 18:57:45.829429  118439 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 18:57:45.829439  118439 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 18:57:45.829501  118439 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2557836671/client-ca.crt"
I0114 18:57:45.829609  118439 controller.go:83] Starting OpenAPI AggregationController
I0114 18:57:45.831618  118439 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/tmp/kubernetes-kube-apiserver2557836671/proxy-ca.crt"
I0114 18:57:45.834460  118439 customresource_discovery_controller.go:209] Starting DiscoveryController
I0114 18:57:45.836010  118439 controller.go:85] Starting OpenAPI controller
I0114 18:57:45.836039  118439 naming_controller.go:291] Starting NamingConditionController
I0114 18:57:45.836069  118439 establishing_controller.go:76] Starting EstablishingController
I0114 18:57:45.836090  118439 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0114 18:57:45.836109  118439 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0114 18:57:45.836211  118439 crd_finalizer.go:266] Starting CRDFinalizer
W0114 18:57:45.851395  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.851695  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.857382  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.857498  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
I0114 18:57:45.929813  118439 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0114 18:57:45.929899  118439 cache.go:39] Caches are synced for autoregister controller
I0114 18:57:45.930172  118439 shared_informer.go:247] Caches are synced for crd-autoregister
I0114 18:57:45.930198  118439 cache.go:39] Caches are synced for AvailableConditionController controller
I0114 18:57:45.930339  118439 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0114 18:57:45.932140  118439 apf_controller.go:322] Running API Priority and Fairness config worker
W0114 18:57:45.939346  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.947141  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.949935  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
I0114 18:57:45.950937  118439 controller.go:611] quota admission added evaluator for: namespaces
W0114 18:57:45.956914  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.964818  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.965074  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.996008  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:45.996102  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.006458  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.016499  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.016516  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.025558  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.025893  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.037363  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.038186  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.044369  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.051803  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.054535  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.062907  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.063180  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.073226  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.073251  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.082929  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.083745  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.091693  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.091964  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.129792  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.130738  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.139142  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.139459  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.147465  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.147958  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.155837  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.155945  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.162876  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.162882  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.172536  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.172869  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.183024  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.191485  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.198108  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.205657  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.213777  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.220620  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.247535  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.259152  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
W0114 18:57:46.266551  118439 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema is deprecated in v1.26+, unavailable in v1.29+
I0114 18:57:46.829442  118439 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0114 18:57:46.829473  118439 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0114 18:57:46.870349  118439 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0114 18:57:46.893207  118439 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0114 18:57:46.893234  118439 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0114 18:57:47.032602  118439 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.0.0.1]
I0114 18:57:47.114908  118439 controller.go:611] quota admission added evaluator for: deployments.apps
W0114 18:57:47.123918  118439 lease.go:233] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0114 18:57:47.126622  118439 controller.go:240] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8, ::1/128)
    client_test.go:1380: Reapplied apply configuration did not match expected, got:
          &v1.DeploymentApplyConfiguration{
          	TypeMetaApplyConfiguration:   {Kind: &"Deployment", APIVersion: &"apps/v1"},
          	ObjectMetaApplyConfiguration: &{Name: &"nginx-apply", Namespace: &"default"},
          	Spec: &v1.DeploymentSpecApplyConfiguration{
          		Replicas: nil,
          		Selector: nil,
          		Template: &v1.PodTemplateSpecApplyConfiguration{
          			ObjectMetaApplyConfiguration: nil,
          			Spec: &v1.PodSpecApplyConfiguration{
          				Volumes:        nil,
          				InitContainers: nil,
          				Containers: []v1.ContainerApplyConfiguration{
          					{
        - 						Name:       &"nginx",
        + 						Name:       &"sidecar",
        - 						Image:      nil,
        + 						Image:      &"nginx:1.14.3",
          						Command:    nil,
          						Args:       nil,
        - 						WorkingDir: &"/tmp/v2",
        + 						WorkingDir: nil,
          						Ports:      nil,
          						EnvFrom:    nil,
          						... // 15 identical fields
          					},
          					{
        - 						Name:       &"sidecar",
        + 						Name:       &"nginx",
        - 						Image:      &"nginx:1.14.3",
        + 						Image:      nil,
          						Command:    nil,
          						Args:       nil,
        - 						WorkingDir: nil,
        + 						WorkingDir: &"/tmp/v2",
          						Ports:      nil,
          						EnvFrom:    nil,
          						... // 15 identical fields
          					},
          				},
          				EphemeralContainers: nil,
          				RestartPolicy:       nil,
          				... // 31 identical fields
          			},
          		},
          		Strategy:        nil,
          		MinReadySeconds: nil,
          		... // 3 identical fields
          	},
          	Status: nil,
          }
        
W0114 18:57:47.135639  118439 cacher.go:150] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
W0114 18:57:47.135972  118439 cacher.go:150] Terminating all watchers from cacher *core.LimitRange
W0114 18:57:47.136157  118439 cacher.go:150] Terminating all watchers from cacher *core.ResourceQuota
W0114 18:57:47.136363  118439 cacher.go:150] Terminating all watchers from cacher *core.Secret
W0114 18:57:47.136886  118439 cacher.go:150] Terminating all watchers from cacher *core.ConfigMap
W0114 18:57:47.137085  118439 cacher.go:150] Terminating all watchers from cacher *core.Namespace
W0114 18:57:47.137273  118439 cacher.go:150] Terminating all watchers from cacher *core.Endpoints
W0114 18:57:47.137645  118439 cacher.go:150] Terminating all watchers from cacher *core.Pod
W0114 18:57:47.137858  118439 cacher.go:150] Terminating all watchers from cacher *core.ServiceAccount
W0114 18:57:47.138269  118439 cacher.go:150] Terminating all watchers from cacher *core.Service
W0114 18:57:47.139682  118439 cacher.go:150] Terminating all watchers from cacher *networking.IngressClass
W0114 18:57:47.140138  118439 cacher.go:150] Terminating all watchers from cacher *node.RuntimeClass
W0114 18:57:47.140994  118439 cacher.go:150] Terminating all watchers from cacher *scheduling.PriorityClass
W0114 18:57:47.141349  118439 cacher.go:150] Terminating all watchers from cacher *storage.StorageClass
W0114 18:57:47.142281  118439 cacher.go:150] Terminating all watchers from cacher *flowcontrol.FlowSchema
W0114 18:57:47.142502  118439 cacher.go:150] Terminating all watchers from cacher *flowcontrol.PriorityLevelConfiguration
W0114 18:57:47.143175  118439 cacher.go:150] Terminating all watchers from cacher *admissionregistration.ValidatingWebhookConfiguration
W0114 18:57:47.143387  118439 cacher.go:150] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration
W0114 18:57:47.143700  118439 cacher.go:150] Terminating all watchers from cacher *apiregistration.APIService
I0114 18:57:47.144288  118439 dynamic_cafile_content.go:210] "Failed to remove file watch, it may have been deleted" file="/tmp/kubernetes-kube-apiserver2557836671/client-ca.crt" err="can't remove non-existent inotify watch for: /tmp/kubernetes-kube-apiserver2557836671/client-ca.crt"
--- FAIL: TestExtractModifyApply_ForceOwnership (4.89s)

				from junit_20220114-185129.xml
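For context on what this test exercises: the comparison at client_test.go:1380 above fails on ordering, not values — the re-extracted apply configuration returns the same nginx and sidecar name/image/workingDir fields, but attached to the containers in the opposite order. The test drives client-go's extract-modify-apply workflow with forced ownership. Below is a minimal sketch of that workflow, assuming a clientset cs and a Deployment previously applied under the field manager "sidecar_test" (the manager name and object names are illustrative, not the test's exact fixtures):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1ac "k8s.io/client-go/applyconfigurations/apps/v1"
	"k8s.io/client-go/kubernetes"
)

// extractModifyApply re-applies only the fields owned by one field manager.
func extractModifyApply(ctx context.Context, cs kubernetes.Interface) error {
	// Read the live Deployment as currently stored in the cluster.
	live, err := cs.AppsV1().Deployments("default").Get(ctx, "nginx-apply", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Extract an apply configuration holding only the fields this manager owns.
	ac, err := appsv1ac.ExtractDeployment(live, "sidecar_test")
	if err != nil {
		return err
	}

	// Mutate an owned field; this step assumes the manager owns the first
	// container (it would panic on an extraction that owns nothing here).
	ac.Spec.Template.Spec.Containers[0].WithImage("nginx:1.14.3")

	// Apply the modified configuration back, taking ownership of any
	// conflicting fields (the ForceOwnership part of the test name).
	_, err = cs.AppsV1().Deployments("default").Apply(ctx, ac, metav1.ApplyOptions{
		FieldManager: "sidecar_test",
		Force:        true,
	})
	return err
}

The test then extracts again and compares against the expected configuration; the diff above shows that round trip coming back with the container list reordered.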



4059 Passed Tests

58 Skipped Tests

Error lines from build-log.txt

... skipping 76 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 158: bogus-expected-to-fail: command not found
!!! [0114 18:39:59] Call tree:
!!! [0114 18:39:59]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0114 18:39:59]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0114 18:39:59]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:134 juLog(...)
!!! [0114 18:39:59]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:162 record_command(...)
!!! [0114 18:39:59]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0114 18:39:59] Running kubeadm tests
+++ [0114 18:40:03] Building go targets for linux/amd64:
    cmd/kubeadm
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kubeadm
+++ [0114 18:40:57] Running tests without code coverage 
{"Time":"2022-01-14T18:41:36.861810112Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t37.228s\n"}
... skipping 204 lines ...
+++ [0114 18:44:02] Building go targets for linux/amd64:
    cmd/kube-controller-manager
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kube-controller-manager
+++ [0114 18:44:30] Generate kubeconfig for controller-manager
+++ [0114 18:44:30] Starting controller-manager
I0114 18:44:30.788145   57462 serving.go:348] Generated self-signed cert in-memory
W0114 18:44:30.973157   57462 authentication.go:419] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 18:44:30.973203   57462 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0114 18:44:30.973214   57462 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0114 18:44:30.973235   57462 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 18:44:30.973257   57462 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0114 18:44:30.973289   57462 controllermanager.go:176] Version: v1.24.0-alpha.1.624+7d177afc059b70
I0114 18:44:30.974780   57462 secure_serving.go:201] Serving securely on [::]:10257
I0114 18:44:30.974905   57462 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 18:44:30.975180   57462 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0114 18:44:30.989318   53915 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
... skipping 89 lines ...
I0114 18:44:31.123175   57462 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0114 18:44:31.123199   57462 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0114 18:44:31.123322   57462 controllermanager.go:585] Started "replicaset"
W0114 18:44:31.123341   57462 controllermanager.go:550] "tokencleaner" is disabled
I0114 18:44:31.123477   57462 replica_set.go:186] Starting replicaset controller
I0114 18:44:31.123505   57462 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
E0114 18:44:31.123657   57462 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0114 18:44:31.123683   57462 controllermanager.go:563] Skipping "service"
I0114 18:44:31.124007   57462 controllermanager.go:585] Started "persistentvolume-expander"
I0114 18:44:31.124126   57462 expand_controller.go:342] Starting expand controller
I0114 18:44:31.124148   57462 shared_informer.go:240] Waiting for caches to sync for expand
W0114 18:44:31.124343   57462 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0114 18:44:31.124376   57462 controllermanager.go:585] Started "endpointslicemirroring"
... skipping 54 lines ...
I0114 18:44:31.144718   57462 shared_informer.go:240] Waiting for caches to sync for disruption
W0114 18:44:31.144692   57462 controllermanager.go:563] Skipping "nodeipam"
I0114 18:44:31.144728   57462 garbagecollector.go:146] Starting garbage collector controller
I0114 18:44:31.144993   57462 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0114 18:44:31.145023   57462 graph_builder.go:289] GraphBuilder running
I0114 18:44:31.145086   57462 node_lifecycle_controller.go:77] Sending events to api server
E0114 18:44:31.145217   57462 core.go:212] failed to start cloud node lifecycle controller: no cloud provider provided
W0114 18:44:31.145241   57462 controllermanager.go:563] Skipping "cloud-node-lifecycle"
I0114 18:44:31.145534   57462 controllermanager.go:585] Started "pv-protection"
I0114 18:44:31.145649   57462 pv_protection_controller.go:79] Starting PV protection controller
I0114 18:44:31.145666   57462 shared_informer.go:240] Waiting for caches to sync for PV protection
I0114 18:44:31.148163   57462 shared_informer.go:240] Waiting for caches to sync for resource quota
W0114 18:44:31.172412   57462 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 42 lines ...
I0114 18:44:31.534750   57462 shared_informer.go:247] Caches are synced for resource quota
I0114 18:44:31.549167   57462 shared_informer.go:247] Caches are synced for resource quota
I0114 18:44:31.975632   57462 shared_informer.go:247] Caches are synced for garbage collector
I0114 18:44:32.045292   57462 shared_informer.go:247] Caches are synced for garbage collector
I0114 18:44:32.045324   57462 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
node/127.0.0.1 created
W0114 18:44:32.572608   57462 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0114 18:44:32] Checking kubectl version
Client Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-alpha.1.624+7d177afc059b70", GitCommit:"7d177afc059b70123b6aef89e5551766897c7c50", GitTreeState:"clean", BuildDate:"2022-01-14T16:19:50Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-alpha.1.624+7d177afc059b70", GitCommit:"7d177afc059b70123b6aef89e5551766897c7c50", GitTreeState:"clean", BuildDate:"2022-01-14T16:19:50Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   36s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 114 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0114 18:44:37] Creating namespace namespace-1642185877-7257
namespace/namespace-1642185877-7257 created
Context "test" modified.
+++ [0114 18:44:37] Testing RESTMapper
+++ [0114 18:44:38] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 62 lines ...
namespace/namespace-1642185885-23362 created
Context "test" modified.
+++ [0114 18:44:45] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1642185893-1225 created
Context "test" modified.
+++ [0114 18:44:54] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 440 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0114 18:45:07.291849   62301 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 2 milliseconds
I0114 18:45:07.294417   62301 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D95faa447-2380-4b5e-ae0e-76250987de8e&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.6:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0114 18:45:24] "kubectl patch with resourceVersion 607" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0114 18:45:25.284772   57462 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.6
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0114 18:45:35] Creating namespace namespace-1642185935-21792
namespace/namespace-1642185935-21792 created
Context "test" modified.
+++ [0114 18:45:35] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 43 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0114 18:45:35] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

+++ Running case: test-cmd.run_kubectl_apply_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 29 lines ...
I0114 18:45:38.585924   57462 event.go:294] "Event occurred" object="namespace-1642185935-30494/test-deployment-retainkeys-8494bdd944" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8494bdd944-mncws"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0114 18:45:39.564480   66115 helpers.go:616] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 29 lines ...
pod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
apply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0114 18:45:48.745100   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642185935-30494/prune-svc" clusterIPs=map[IPv4:10.0.0.5]
service/prune-svc created
I0114 18:45:49.664668   57462 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1642185932-16631
... skipping 38 lines ...
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
apply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0114 18:46:18.142229   57462 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 5 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0114 18:46:26.650815   57462 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I0114 18:46:27.506942   53915 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 32 lines ...
message:886
has:886
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0114 18:46:30] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 68 lines ...
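The conflict message quoted above is kubectl's standard guidance for server-side apply conflicts: either take over the contested fields or stop managing them. As a generic illustration in the transcript's own command-line form (manifest name hypothetical, not a command from this log), taking ownership means re-running the apply with conflicts forced:

kubectl apply --server-side --force-conflicts -f manifest.yaml

With --force-conflicts, the applying field manager becomes the owner of the previously contested fields, which is what the test exercises here.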
+++ [0114 18:46:32] Creating namespace namespace-1642185992-23085
namespace/namespace-1642185992-23085 created
Context "test" modified.
+++ [0114 18:46:33] Testing kubectl run
pod/nginx-extensions created (dry run)
W0114 18:46:33.875342   53915 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0114 18:46:33.876890   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
pod/nginx-extensions created (server dry run)
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/nginx-extensions created
run.sh:39: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: nginx-extensions:
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_filter_tests
+++ [0114 18:46:34] Creating namespace namespace-1642185994-16434
namespace/namespace-1642185994-16434 created
Context "test" modified.
+++ [0114 18:46:34] Testing kubectl create filter
W0114 18:46:34.775407   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:46:34.775445   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 17 lines ...
deployment.apps/my-depl configured
apps.sh:136: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:137: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:138: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
deployment.apps "my-depl" deleted
replicaset.apps "my-depl-57db5db5c9" deleted
E0114 18:46:37.323786   57462 replica_set.go:534] sync "namespace-1642185995-9015/my-depl-57db5db5c9" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-57db5db5c9": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1642185995-9015/my-depl-57db5db5c9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 920e0a4a-049e-49de-b622-5643a40057f6, UID in object meta: 
pod "my-depl-57db5db5c9-xqzxf" deleted
apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:145: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:146: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0114 18:46:37.845973   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-67874f748d to 3"
I0114 18:46:37.854645   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-scnzh"
I0114 18:46:37.863741   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-2vf45"
I0114 18:46:37.863774   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-cx8cb"
W0114 18:46:37.941574   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:46:37.941607   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1642185995-9015\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1642185995-9015"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
W0114 18:46:43.075339   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:46:43.075377   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx configured
I0114 18:46:46.394988   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-84cc67fc67 to 3"
I0114 18:46:46.402713   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-t6qpk"
I0114 18:46:46.411967   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-bs7z2"
I0114 18:46:46.417902   57462 event.go:294] "Event occurred" object="namespace-1642185995-9015/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-29bdm"
Successful
... skipping 17 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_diff_tests
+++ [0114 18:46:51] Creating namespace namespace-1642186011-10562
namespace/namespace-1642186011-10562 created
Context "test" modified.
+++ [0114 18:46:51] Testing kubectl diff
W0114 18:46:51.230832   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:46:51.230868   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:diff -u -N /tmp/LIVE-2579197247/v1.Pod.namespace-1642186011-10562.test-pod /tmp/MERGED-88668511/v1.Pod.namespace-1642186011-10562.test-pod
--- /tmp/LIVE-2579197247/v1.Pod.namespace-1642186011-10562.test-pod	2022-01-14 18:46:51.304032366 +0000
+++ /tmp/MERGED-88668511/v1.Pod.namespace-1642186011-10562.test-pod	2022-01-14 18:46:51.304032366 +0000
@@ -0,0 +1,55 @@
+apiVersion: v1
... skipping 466 lines ...
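
kubectl diff, exercised here, renders a unified diff between the live object (the LIVE-* file) and the result of a would-be apply (the MERGED-* file); for the new test-pod the LIVE side is empty, hence the all-additions diff. A usage sketch (manifest path is illustrative); per kubectl's documented convention, exit code 0 means no differences, 1 means differences were found, and greater than 1 means an error occurred:

kubectl diff -f pod.yaml
rc=$?
# Fail a script only on real errors, not on pending changes:
if [ "$rc" -gt 1 ]; then
  echo "kubectl diff failed" >&2
  exit "$rc"
fi
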
+++ [0114 18:46:58] Creating namespace namespace-1642186018-17676
namespace/namespace-1642186018-17676 created
Context "test" modified.
+++ [0114 18:46:59] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1642186018-17676 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1642186018-17676 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0114 18:47:00.967279   69712 loader.go:372] Config loaded from file:  /tmp/tmp.McTURoRUmR/.kube/config
I0114 18:47:00.972562   69712 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0114 18:47:01.008616   69712 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I0114 18:47:01.010406   69712 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 529 lines ...
message:NAME               DATA   AGE
kube-root-ca.crt   1      8s
one                0      0s
three              0      0s
two                0      0s
has not:watch is only supported on individual resources
W0114 18:47:08.071967   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:08.072005   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:
has not:watch is only supported on individual resources
+++ [0114 18:47:08] Creating namespace namespace-1642186028-9811
namespace/namespace-1642186028-9811 created
Context "test" modified.
... skipping 58 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>
Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-01-14T18:47:08Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-01-14T18:47:08Z"}}, "name":"valid-pod", "namespace":"namespace-1642186028-9811", "resourceVersion":"1063", "uid":"36de7f8e-4dea-4d9d-b25b-e17537382122"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
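
The jsonpath failure above is kubectl's standard behavior for an expression that references a key absent from the object: it echoes the template and the whole object, then errors. A sketch contrasting a valid path with the failing one (pod name taken from the log):

kubectl get pod valid-pod -o jsonpath='{.metadata.name}'   # prints: valid-pod
kubectl get pod valid-pod -o jsonpath='{.missing}'         # error: missing is not found
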
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-01-14T18:47:08Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-01-14T18:47:08Z"}],"name":"valid-pod","namespace":"namespace-1642186028-9811","resourceVersion":"1063","uid":"36de7f8e-4dea-4d9d-b25b-e17537382122"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-01-14T18:47:08Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-01-14T18:47:08Z]] name:valid-pod namespace:namespace-1642186028-9811 resourceVersion:1063 uid:36de7f8e-4dea-4d9d-b25b-e17537382122] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 84 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 37 lines ...
+++ [0114 18:47:14] Creating namespace namespace-1642186034-11882
namespace/namespace-1642186034-11882 created
Context "test" modified.
+++ [0114 18:47:14] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
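
Each exec call above triggers the deprecation warning for the bare COMMAND form; the "--" separator form is the supported spelling, and it also keeps flags meant for the command from being parsed by kubectl. Sketch (pod and commands are illustrative):

kubectl exec test-pod date          # deprecated: COMMAND without "--"
kubectl exec test-pod -- date       # supported form
kubectl exec test-pod -- ls -l /    # "-l" goes to ls, not to kubectl
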
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0114 18:47:14] Creating namespace namespace-1642186034-30471
namespace/namespace-1642186034-30471 created
Context "test" modified.
+++ [0114 18:47:15] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0114 18:47:16.059249   57462 event.go:294] "Event occurred" object="namespace-1642186034-30471/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-k2tpc"
I0114 18:47:16.069574   57462 event.go:294] "Event occurred" object="namespace-1642186034-30471/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-88jff"
I0114 18:47:16.069615   57462 event.go:294] "Event occurred" object="namespace-1642186034-30471/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-sk4wj"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-88jff does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-88jff does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
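
The TYPE/NAME cases show kubectl exec resolving a workload reference to one of its pods: the unknown "foo" type and the missing "bar" Deployment fail fast, and the ConfigMap is rejected because it has no pod selector to resolve through. Sketch (names taken from the log):

kubectl exec deploy/bar -- date                       # fails here: deployment "bar" not found
kubectl exec rs/frontend -- date                      # resolves to one of the ReplicaSet's pods
kubectl exec configmap/test-set-env-config -- date    # error: selector for *v1.ConfigMap not implemented
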
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"dd6e4063-b700-4721-ba01-4d00458eb114","resourceVersion":"1141","creationTimestamp":"2022-01-14T18:47:17Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"dd6e4063-b700-4721-ba01-4d00458eb114","resourceVersion":"1142","creationTimestamp":"2022-01-14T18:47:17Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"dd6e4063-b700-4721-ba01-4d00458eb114","resourceVersion":"1142","creationTimestamp":"2022-01-14T18:47:17Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"dd6e4063-b700-4721-ba01-4d00458eb114"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 74 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
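
The recurring "Object 'Kind' is missing" errors come from the broken fixture, whose kind field is misspelled "ind", so the decoder cannot determine the object type. Depending on the code path this surfaces as a decode error (as here) or, on create with validation on, as "error validating data: kind not set". A minimal reproduction of the same shape (file path is illustrative):

cat <<'EOF' >/tmp/broken-pod.yaml
apiVersion: v1
ind: Pod          # typo: should be "kind"
metadata:
  name: busybox2
EOF
kubectl apply -f /tmp/broken-pod.yaml   # error: Object 'Kind' is missing
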
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 104 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
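
The rejected value exercises client-side timeout parsing: a bare integer is read as seconds, an integer plus unit (1s, 2m, 3h) is also accepted, and anything else produces the "Invalid timeout value" error seen above. Assuming the test is driving kubectl's --request-timeout flag, a sketch:

kubectl get pods --request-timeout=10    # valid: bare integer, seconds
kubectl get pods --request-timeout=5s    # valid: integer plus unit
kubectl get pods --request-timeout=10x   # error: Invalid timeout value
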
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 151 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:282: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:284: Successful get foos/test {{.patched}}: <no value>
+++ [0114 18:47:30] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 323 lines ...
crd.sh:505: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:510: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:513: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0114 18:47:44] Testing recursive resources
+++ [0114 18:47:44] Creating namespace namespace-1642186064-13379
namespace/namespace-1642186064-13379 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0114 18:47:45.575292   53915 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0114 18:47:45.576940   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:47:45.671385   53915 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0114 18:47:45.673060   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:47:45.761957   53915 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0114 18:47:45.763409   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:47:45.875674   53915 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0114 18:47:45.876995   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
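
In the recursive apply/replace runs the well-formed busybox0/busybox1 objects go through while the broken file fails validation. As the error text suggests, --validate=false skips the schema check, though a file whose kind is missing still fails at decode time. Sketch over the directory used by the test:

kubectl apply -R -f hack/testdata/recursive/pod                     # two pods ok, broken file errors
kubectl apply -R -f hack/testdata/recursive/pod --validate=false    # skips validation; decode errors remain
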
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1642186064-13379
Priority:     0
Node:         <none>
... skipping 154 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W0114 18:47:46.800751   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:46.800784   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0114 18:47:47.169492   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:47.169525   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
W0114 18:47:47.474115   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:47.474155   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
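
Force deletion removes every object the recursive walk could decode and still reports the undecodable file. The invocation shape, per the warning text above (grace period zero skips waiting for graceful termination):

kubectl delete -R -f hack/testdata/recursive/pod --force --grace-period=0
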
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
I0114 18:47:48.685162   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-jfzcm"
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 18:47:48.714960   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-6x4z8"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
I0114 18:47:49.289333   57462 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0114 18:47:49.626217   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:49.626261   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
W0114 18:47:49.866284   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:49.866316   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 18:47:49.868833   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186064-13379/busybox0" clusterIPs=map[IPv4:10.0.0.49]
I0114 18:47:49.889863   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186064-13379/busybox1" clusterIPs=map[IPv4:10.0.0.159]
W0114 18:47:49.967160   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:49.967197   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0114 18:47:50.423980   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-vm9fp"
I0114 18:47:50.441954   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-wj8xf"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
I0114 18:47:51.207309   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx1-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-7997c7844 to 2"
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 18:47:51.211007   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx0-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-c8966c5d7 to 2"
I0114 18:47:51.221059   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx1-deployment-7997c7844" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-7997c7844-85nd8"
I0114 18:47:51.221097   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx0-deployment-c8966c5d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-c8966c5d7-rmbwf"
I0114 18:47:51.229547   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx1-deployment-7997c7844" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-7997c7844-6slcj"
I0114 18:47:51.229588   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/nginx0-deployment-c8966c5d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-c8966c5d7-tb2bq"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:400: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
I0114 18:47:53.464963   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-dv6dh"
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 18:47:53.474317   57462 event.go:294] "Event occurred" object="namespace-1642186064-13379/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-f6j4r"
W0114 18:47:53.502444   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:53.502481   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:404: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
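
rollout pause/resume are implemented per kind, and ReplicationControllers have neither a pauser nor a rollbacker, so each recursive rollout verb above decodes busybox0/busybox1 successfully and then reports "not supported" for both. The same verbs work against Deployments. Sketch (deployment name is illustrative):

kubectl rollout pause rc/busybox0          # error: pausing is not supported
kubectl rollout pause deployment/nginx     # sets .spec.paused=true
kubectl rollout resume deployment/nginx    # clears it
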
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0114 18:47:54] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
... skipping 125 lines ...
I0114 18:47:55.699027   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1642186064-13379 200 OK in 1 milliseconds
I0114 18:47:55.700221   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1642186064-13379/resourcequotas?limit=500 200 OK in 1 milliseconds
I0114 18:47:55.701515   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1642186064-13379/limitranges?limit=500 200 OK in 1 milliseconds
I0114 18:47:55.703271   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb 200 OK in 1 milliseconds
I0114 18:47:55.704622   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/resourcequotas?limit=500 200 OK in 1 milliseconds
I0114 18:47:55.705863   74873 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/limitranges?limit=500 200 OK in 1 milliseconds
W0114 18:47:56.016651   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:56.016685   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "my-namespace" deleted
W0114 18:47:56.161612   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:47:56.161648   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0114 18:48:01.707069   57462 shared_informer.go:240] Waiting for caches to sync for resource quota
I0114 18:48:01.707117   57462 shared_informer.go:247] Caches are synced for resource quota
Successful
... skipping 34 lines ...
namespace "namespace-1642186039-5579" deleted
namespace "namespace-1642186040-5396" deleted
namespace "namespace-1642186042-8206" deleted
namespace "namespace-1642186043-31313" deleted
namespace "namespace-1642186064-13379" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1642185874-21860" deleted
... skipping 30 lines ...
namespace "namespace-1642186039-5579" deleted
namespace "namespace-1642186040-5396" deleted
namespace "namespace-1642186042-8206" deleted
namespace "namespace-1642186043-31313" deleted
namespace "namespace-1642186064-13379" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
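
The sweep over all namespaces stops short of default, kube-public, and kube-system: deleting those is forbidden by the API server (the NamespaceLifecycle admission plugin enforces this), while every test namespace deletes normally. Sketch:

kubectl delete namespace default        # Error from server (Forbidden): ... may not be deleted
kubectl delete namespace my-namespace   # ordinary namespaces delete fine
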
namespace/quotas created
core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas
I0114 18:48:02.143156   57462 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0114 18:48:02.143220   57462 shared_informer.go:247] Caches are synced for garbage collector
core.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
... skipping 9 lines ...
I0114 18:48:02.648481   75082 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0114 18:48:02.673399   75082 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 2 milliseconds
I0114 18:48:02.675934   75082 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
I0114 18:48:02.812496   57462 resource_quota_controller.go:311] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
W0114 18:48:03.190866   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:03.190903   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 18:48:04.133071   57462 horizontal.go:360] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1642186064-13379
I0114 18:48:04.160720   57462 horizontal.go:360] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1642186064-13379
W0114 18:48:05.003826   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:05.003859   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:48:06.135807   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:06.135846   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1511: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
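Note: the error above is kubectl refusing a named get combined with --all-namespaces; the list forms are fine:
  kubectl get pod valid-pod --all-namespaces      # fails: name + --all-namespaces
  kubectl get pods --all-namespaces               # ok: list across namespaces
  kubectl get pod valid-pod --namespace=other     # ok: name within one namespace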
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 125 lines ...
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
I0114 18:48:20.315479   57462 namespace_controller.go:185] Namespace has been deleted other
W0114 18:48:23.361830   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:23.361864   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 9 lines ...
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created (dry run)
configmap/test-configmap created (server dry run)
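Note: the two dry-run lines above map onto kubectl's two dry-run modes; a sketch (the literal key/value is illustrative):
  # client dry run: rendered locally, nothing sent to the server
  kubectl create configmap test-configmap --from-literal=key1=value1 --dry-run=client -o yaml
  # server dry run: submitted and admitted by the server, but not persisted
  kubectl create configmap test-configmap --from-literal=key1=value1 --dry-run=server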
W0114 18:48:25.063897   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:25.063932   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
W0114 18:48:25.242597   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:25.242631   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
query for configmaps had limit param
query for events had limit param
query for configmaps had user-specified limit param
Successful describe configmaps verbose logs:
... skipping 6 lines ...
I0114 18:48:25.636174   76265 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.name%3Dtest-binary-configmap%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3Ddad2a79b-4efb-4c09-9584-7bffd46095eb&limit=500 200 OK in 1 milliseconds
I0114 18:48:25.638017   76265 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-configmap 200 OK in 1 milliseconds
I0114 18:48:25.639558   76265 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.name%3Dtest-configmap%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3Df67b0287-ff6c-463d-b6b4-c098bd6632ef&limit=500 200 OK in 1 milliseconds
(Bconfigmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
W0114 18:48:28.424375   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:28.424415   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 18:48:28.879353   57462 namespace_controller.go:185] Namespace has been deleted test-secrets
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0114 18:48:31] Creating namespace namespace-1642186111-31442
namespace/namespace-1642186111-31442 created
Context "test" modified.
+++ [0114 18:48:31] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
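Note: the client-config failures above are each driven by pointing one kubeconfig knob at something nonexistent; a sketch with standard flags:
  kubectl get pods --kubeconfig=missing           # stat missing: no such file or directory
  kubectl get pods --context=missing-context      # context was not found
  kubectl get pods --cluster=missing-cluster      # no server found for cluster
  kubectl get pods --user=missing-user            # auth info does not exist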
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 58 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 56 lines ...
Annotations:      batch.kubernetes.io/job-tracking: 
                  cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 14 Jan 2022 18:48:39 +0000
Pods Statuses:    1 Active / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=f8925b0c-af02-4e8b-8e0d-74667a1c74cb
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 342 lines ...
IPs:               10.0.0.243
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
W0114 18:48:49.054844   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:48:49.054883   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
query for services had limit param
query for events had limit param
query for services had user-specified limit param
Successful describe services verbose logs:
I0114 18:48:49.085964   77880 loader.go:372] Config loaded from file:  /tmp/tmp.McTURoRUmR/.kube/config
I0114 18:48:49.091491   77880 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
... skipping 116 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
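Note: the Conflict above is optimistic concurrency at work: passing a stale --resource-version makes the write fail. A sketch (the selector value is illustrative):
  kubectl set selector services redis-master role=padawan                      # succeeds against the live object
  kubectl set selector services redis-master role=padawan --resource-version=1 # Conflict: object has been modified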
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0114 18:48:50.726144   57462 namespace_controller.go:185] Namespace has been deleted test-jobs
service "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
... skipping 142 lines ...
apps.sh:90: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:91: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
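Note: the rollback pair above is kubectl rollout undo with and without a valid revision; a sketch reusing the daemonset name from the log:
  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: revision not in history
  kubectl rollout undo daemonset/bind                         # rolls back to the previous revision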
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:103: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:104: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 36 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-gtcpn
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-f4fjw
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-ppnv7
W0114 18:49:00.927736   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:00.927778   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1222: Successful describe
Name:         frontend
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1642186139-22393
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0114 18:49:01.949609   57462 replica_set.go:205] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1642186139-22393  e96bd484-d852-49ce-a0e9-bb6fc0e2c6c4 2061 2 2022-01-14 18:49:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-01-14 18:49:00 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-01-14 18:49:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0026a8028 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 18:49:01.975627   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-gtcpn"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
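Note: "Expected replicas to be 3, was 2" is the --current-replicas precondition failing; a sketch with standard kubectl scale flags:
  # the scale is applied only if the live replica count matches the precondition
  kubectl scale rc frontend --replicas=3 --current-replicas=3   # fails here: live count is 2
  kubectl scale rc frontend --replicas=3 --current-replicas=2   # would match and scale up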
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0114 18:49:02.475164   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nq5v4"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 22 lines ...
I0114 18:49:03.893373   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-65547564b5 to 3"
I0114 18:49:03.905599   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-cjc97"
I0114 18:49:03.915947   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-2c95p"
I0114 18:49:03.916837   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-s6zd2"
deployment.apps/nginx-deployment scaled
I0114 18:49:03.982017   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-65547564b5 to 1"
E0114 18:49:03.990322   57462 replica_set.go:534] sync "namespace-1642186139-22393/nginx-deployment-65547564b5" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-65547564b5": the object has been modified; please apply your changes to the latest version and try again
I0114 18:49:04.019936   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-65547564b5-2c95p"
I0114 18:49:04.032707   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-65547564b5-cjc97"
core.sh:1288: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
(Bdeployment.apps "nginx-deployment" deleted
I0114 18:49:04.286128   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186139-22393/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.181]
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
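Note: kubectl expose copies the workload's pod selector onto the new Service, so a deployment without one must have a selector supplied explicitly; a sketch (the selector value is illustrative):
  kubectl expose deployment nginx-deployment --port=80                        # fails when no selector can be introspected
  kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx   # explicit selector works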
deployment.apps/nginx-deployment created
I0114 18:49:04.647845   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-65547564b5 to 3"
I0114 18:49:04.684161   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-tx2tt"
I0114 18:49:04.698058   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-x85g2"
... skipping 25 lines ...
(Bpod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
I0114 18:49:06.938679   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186139-22393/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.237]
Successful
... skipping 16 lines ...
I0114 18:49:07.965640   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nz2bq"
I0114 18:49:07.965930   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mfjx5"
replicationcontroller/redis-slave created
I0114 18:49:08.169299   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/redis-slave" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-wr4gh"
I0114 18:49:08.179186   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/redis-slave" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-zq6n7"
core.sh:1385: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
W0114 18:49:08.255374   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:08.255411   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1389: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
(Breplicationcontroller "frontend" deleted
replicationcontroller "redis-slave" deleted
core.sh:1393: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1397: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
... skipping 4 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
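Note: the "required flag(s) \"max\" not set" error and the two HPA shapes above (1/2/70 and 2/3/80) map onto kubectl autoscale; a sketch reusing names from the log:
  kubectl autoscale rc frontend --max=2 --cpu-percent=70           # min defaults when omitted
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
  kubectl autoscale rc frontend --min=2 --cpu-percent=70           # error: required flag(s) "max" not set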
replicationcontroller "frontend" deleted
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0114 18:49:09.928310   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5f9f5f86d to 3"
I0114 18:49:09.937403   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-zh62s"
I0114 18:49:09.944147   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-54rld"
I0114 18:49:09.944349   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-zbdg7"
core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 18:49:10.304611   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-7b667bb8bd to 1"
I0114 18:49:10.337663   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-7b667bb8bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-7b667bb8bd-66qwz"
core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
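Note: "unable to find container named redis" is kubectl set resources addressing a container that is not in the pod template; a sketch reusing names from the log (the failing limit value is illustrative):
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=1        # fails: no such container
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m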
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 18:49:10.683643   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f9f5f86d to 2"
I0114 18:49:10.710384   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-77dcc977fd to 1"
I0114 18:49:10.719460   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f9f5f86d-zbdg7"
I0114 18:49:10.720144   57462 event.go:294] "Event occurred" object="namespace-1642186139-22393/nginx-deployment-resources-77dcc977fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-77dcc977fd-c7hv7"
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=d789bf5f8
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=d789bf5f8
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 59 lines ...
(Bdeployment.apps "nginx-with-command" deleted
apps.sh:236: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/deployment-with-unixuserid created
I0114 18:49:14.253957   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/deployment-with-unixuserid" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-with-unixuserid-5f68bfb886 to 1"
I0114 18:49:14.262461   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/deployment-with-unixuserid-5f68bfb886" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-with-unixuserid-5f68bfb886-2m847"
apps.sh:240: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
W0114 18:49:14.420532   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:14.420600   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "deployment-with-unixuserid" deleted
apps.sh:247: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0114 18:49:14.729322   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-65547564b5 to 3"
I0114 18:49:14.737987   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-zwfbr"
I0114 18:49:14.771884   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-zhfcm"
... skipping 52 lines ...
apps.sh:311: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:315: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:322: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:326: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
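Note: the pause/resume errors above follow kubectl's rollout state machine; a sketch reusing the deployment name from the log:
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx      # error: cannot rollback a paused deployment
  kubectl rollout restart deployment/nginx   # error: can't restart paused deployment
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx      # now succeeds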
I0114 18:49:21.873513   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-68d6dd6c9d to 2"
I0114 18:49:21.894475   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-66b877d8c6 to 1"
I0114 18:49:21.901799   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-68d6dd6c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-68d6dd6c9d-kfgcm"
I0114 18:49:21.901959   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-66b877d8c6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-66b877d8c6-ldwhb"
Successful
... skipping 81 lines ...
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0114 18:49:24.591048   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-d694cbfb7 to 1"
I0114 18:49:24.600476   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-d694cbfb7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-d694cbfb7-vbl46"
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:374: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:379: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:380: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:383: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:384: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
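Note: these image flips are kubectl set image; the name on the left must match a container in the pod template, and '*' addresses all of them. A sketch reusing names from the log (the redis image tag is illustrative):
  kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
  kubectl set image deployment/nginx-deployment redis=k8s.gcr.io/redis:1.0    # error: unable to find container named "redis"
  kubectl set image deployment/nginx-deployment '*'=k8s.gcr.io/nginx:test-cmd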
... skipping 45 lines ...
I0114 18:49:28.302531   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-7485945f99" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-7485945f99-sm2hb"
I0114 18:49:28.302688   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-84db766476" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-84db766476-9cvj2"
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
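Note: the env updates above are kubectl set env; literal pairs and --from imports both appear in this run, and the error above is kubectl rejecting '-' (stdin) being supplied for more than one argument. A sketch reusing names from the log (the literal pair is illustrative):
  kubectl set env deployment/nginx-deployment ENV=prod
  kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config
  kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret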
deployment.apps "nginx-deployment" deleted
I0114 18:49:28.758393   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-686d795775 to 0"
I0114 18:49:28.783636   57462 event.go:294] "Event occurred" object="namespace-1642186152-10085/nginx-deployment-686d795775" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-686d795775-4tpgx"
E0114 18:49:28.874524   57462 replica_set.go:534] sync "namespace-1642186152-10085/nginx-deployment-686d795775" failed with replicasets.apps "nginx-deployment-686d795775" not found
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

... skipping 17 lines ...
I0114 18:49:29.889381   57462 event.go:294] "Event occurred" object="namespace-1642186169-23211/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-5748q"
I0114 18:49:29.898546   57462 event.go:294] "Event occurred" object="namespace-1642186169-23211/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-t89d6"
I0114 18:49:29.898583   57462 event.go:294] "Event occurred" object="namespace-1642186169-23211/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-897nx"
apps.sh:567: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0114 18:49:30] Deleting rs
replicaset.apps "frontend" deleted
E0114 18:49:30.121308   57462 replica_set.go:534] sync "namespace-1642186169-23211/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1642186169-23211/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 2917c420-ca9d-4cd4-97ed-d93fa9573529, UID in object meta: 
apps.sh:571: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:573: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-5748q" deleted
pod "frontend-897nx" deleted
pod "frontend-t89d6" deleted
apps.sh:576: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1642186169-23211
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 226 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:716: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 38 lines ...
namespace/namespace-1642186180-7163 created
Context "test" modified.
+++ [0114 18:49:40] Testing kubectl(v1:statefulsets, v1:controllerrevisions)
apps.sh:456: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
Flag --record has been deprecated, --record will be removed in the future
statefulset.apps/nginx created
W0114 18:49:41.119920   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:41.119960   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:460: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1642186180-7163"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:
statefulset.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:463: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:464: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Flag --record has been deprecated, --record will be removed in the future
... skipping 24 lines ...
apps.sh:475: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:476: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:484: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:485: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:488: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:489: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 63 lines ...
Name:         mock
Namespace:    namespace-1642186184-20891
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 2 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: mock-2vdqg
W0114 18:49:45.306794   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:45.306829   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "mock" deleted
replicationcontroller "mock" deleted
I0114 18:49:45.363439   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186184-20891/mock" clusterIPs=map[IPv4:10.0.0.181]
service/mock replaced
replicationcontroller/mock replaced
I0114 18:49:45.380376   57462 event.go:294] "Event occurred" object="namespace-1642186184-20891/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-rw8h2"
... skipping 47 lines ...
Name:         mock
Namespace:    namespace-1642186184-20891
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 25 lines ...
replicationcontroller/mock annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
(Bservice "mock" deleted
replicationcontroller "mock" deleted
Testing with file hack/testdata/multi-resource-json.json and replace with file hack/testdata/multi-resource-json-modify.json
W0114 18:49:49.082526   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:49.082566   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0114 18:49:49.346318   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186184-20891/mock" clusterIPs=map[IPv4:10.0.0.58]
service/mock created
replicationcontroller/mock created
I0114 18:49:49.362075   57462 event.go:294] "Event occurred" object="namespace-1642186184-20891/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-hv7kz"
... skipping 24 lines ...
Name:         mock
Namespace:    namespace-1642186184-20891
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1642186184-20891
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1642186184-20891
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 36 lines ...
I0114 18:49:53.975570   53915 alloc.go:327] "allocated clusterIPs" service="namespace-1642186184-20891/mock2" clusterIPs=map[IPv4:10.0.0.172]
service/mock2 created
generic-resources.sh:70: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2:
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
mock    ClusterIP   10.0.0.231   <none>        99/TCP    1s
mock2   ClusterIP   10.0.0.172   <none>        99/TCP    1s
W0114 18:49:54.224507   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:49:54.224567   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Name:              mock
Namespace:         namespace-1642186184-20891
Labels:            app=mock
Annotations:       <none>
Selector:          app=mock
Type:              ClusterIP
... skipping 86 lines ...
I0114 18:49:58.167089   87249 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 2 milliseconds
I0114 18:49:58.170022   87249 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0114 18:49:58.180639   87249 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3D9ebfc94f-bdde-43b1-a050-3ae5cdf8690f&limit=500 200 OK in 9 milliseconds
(Bpersistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0114 18:49:58.623033   57462 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 88 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 32 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 39 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 14 Jan 2022 18:44:32 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 14 Jan 2022 18:44:32 +0000   Fri, 14 Jan 2022 18:45:36 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 149 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:855: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:856: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:857: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:858: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 411 lines ...
namespace-1642186196-26073   default   0         18s
namespace-1642186198-29810   default   0         15s
namespace-1642186209-25678   default   0         5s
some-other-random            default   0         6s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
W0114 18:50:18.916829   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:50:18.916864   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
I0114 18:50:25.091387   57462 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
W0114 18:50:25.273120   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:50:25.273161   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
get.sh:392: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:396: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:400: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
Successful
... skipping 9 lines ...
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
error: 1 warning received
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
error: 1 warning received
has:error: 1 warning received
Recording: run_template_output_tests
Running command: run_template_output_tests

+++ Running case: test-cmd.run_template_output_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_template_output_tests
... skipping 556 lines ...
node/127.0.0.1 cordoned (server dry run)
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1642186234-27175/test-pod-1
evicting pod namespace-1642186234-27175/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1642186234-27175/test-pod-1
W0114 18:50:37.748415   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:50:37.748454   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:50:41.023894   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:50:41.023934   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 18:51:09.367662   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:51:09.367703   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1642186234-27175/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
... skipping 14 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1642186234-27175/test-pod-1, namespace-1642186234-27175/test-pod-2
evicting pod namespace-1642186234-27175/test-pod-1 (dry run)
evicting pod namespace-1642186234-27175/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [0114 18:51:12] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0114 18:51:12] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
I0114 18:51:14.202408   57462 event.go:294] "Event occurred" object="namespace-1642186274-20347/test-1" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-68fd6f9fb to 1"
I0114 18:51:14.221057   57462 event.go:294] "Event occurred" object="namespace-1642186274-20347/test-1-68fd6f9fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-68fd6f9fb-7rbd8"
deployment.apps/test-2 created
I0114 18:51:14.265259   57462 event.go:294] "Event occurred" object="namespace-1642186274-20347/test-2" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-79d95cd99 to 1"
I0114 18:51:14.277186   57462 event.go:294] "Event occurred" object="namespace-1642186274-20347/test-2-79d95cd99" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-79d95cd99-8fsds"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0114 18:51:15.799439   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:51:15.799473   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
... skipping 57 lines ...
debug.sh:93: Successful get pod/node-debugger-127.0.0.1-p9kf9 {{(index .spec.volumes 0).hostPath.path}}: /
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-p9kf9" force deleted
+++ exit code: 0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
W0114 18:51:20.028430   57462 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 18:51:20.028474   57462 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
+++ [0114 18:51:20] TESTS PASSED
I0114 18:51:20.072671   53915 controller.go:198] Shutting down kubernetes service endpoint reconciler
I0114 18:51:20.072671   53915 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
I0114 18:51:20.072967   53915 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::hack/testdata/ca.crt"
... skipping 11 lines ...
I0114 18:51:20.073304   53915 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
I0114 18:51:20.073321   53915 available_controller.go:503] Shutting down AvailableConditionController
I0114 18:51:20.073337   53915 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0114 18:51:20.073373   53915 controller.go:89] Shutting down OpenAPI AggregationController
I0114 18:51:20.073416   53915 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::hack/testdata/ca.crt"
I0114 18:51:20.073507   53915 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
W0114 18:51:20.073529   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0114 18:51:20.073570   53915 secure_serving.go:246] Stopped listening on 127.0.0.1:6443
I0114 18:51:20.073616   53915 dynamic_serving_content.go:145] "Shutting down controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
... skipping 21 lines ...
{"level":"warn","ts":"2022-01-14T18:51:20.074Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00087ac40/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
... skipping 2 lines ...
E0114 18:51:20.075180   53915 controller.go:201] Unable to remove endpoints from kubernetes service: rpc error: code = Unavailable desc = error reading from server: EOF
... skipping 40 lines ...
junit report dir: /tmp/junit-results
+++ [0114 18:51:20] Clean up complete
+ make test-integration
W0114 18:51:20.491660   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.074975   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075003   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075013   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075027   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075032   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075047   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075017   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075096   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075098   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.074975   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075121   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075124   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075135   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075139   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075157   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.074975   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075175   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075185   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075188   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075202   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075188   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075223   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075282   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075289   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075365   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075614   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075662   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075725   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075774   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075904   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075853   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075870   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.075964   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0114 18:51:21.076076   53915 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 30 lines ...
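The warnings above are the kube-apiserver's embedded etcd client redialing 127.0.0.1:2379 while etcd is not yet listening: grpc dials lazily, so client construction succeeds and the addrConn retries in the background, logging "Reconnecting..." on each refused connection. A minimal sketch of how a client ends up in this state, assuming go.etcd.io/etcd/client/v3 (the client library kube-apiserver vendors); the standalone main is illustrative only, not code from this job:

package main

import (
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// New returns immediately even if nothing listens on 2379; the underlying
	// grpc connection keeps redialing and emits the clientconn.go warnings
	// seen above until etcd binds the port.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	fmt.Println("client constructed; RPCs fail until etcd comes up")
}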
+++ [0114 18:51:24] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
+++ [0114 18:51:24] Starting etcd instance
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.DGWsfSWwUW --listen-client-urls http://127.0.0.1:2379 --log-level=debug > "/logs/artifacts/etcd.4e5e5631-7568-11ec-959e-0a430601ea38.root.log.DEBUG.20220114-185124.94495" 2>/dev/null
Waiting for etcd to come up.
+++ [0114 18:51:25] On try 2, etcd: : {"health":"true","reason":""}
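The wait above polls etcd's /health endpoint on its client URL until it reports {"health":"true","reason":""}, the exact response shown in the log line. A minimal sketch of such a readiness loop; this is an illustration of the pattern, not the harness's actual script, and waitForEtcd is a hypothetical name:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

func waitForEtcd(baseURL string, tries int) error {
	for i := 1; i <= tries; i++ {
		resp, err := http.Get(baseURL + "/health")
		if err == nil {
			var h struct {
				Health string `json:"health"`
			}
			ok := json.NewDecoder(resp.Body).Decode(&h) == nil && h.Health == "true"
			resp.Body.Close()
			if ok {
				return nil // etcd answered {"health":"true"}
			}
		}
		// Connections are refused until etcd binds the port; back off and retry.
		time.Sleep(time.Second)
	}
	return fmt.Errorf("etcd not healthy after %d tries", tries)
}

func main() {
	if err := waitForEtcd("http://127.0.0.1:2379", 30); err != nil {
		panic(err)
	}
	fmt.Println("etcd is up")
}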
... skipping 787 lines ...
=== SKIP: vendor/k8s.io/apiextensions-apiserver/test/integration TestLimits/ApplyPatchType_nested_patch_under_limit (0.00s)
    limit_test.go:228: skipping expensive test
W0114 19:05:32.429858  125599 cacher.go:150] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
I0114 19:05:32.430107  125599 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
I0114 19:05:32.430120  125599 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
I0114 19:05:32.430146  125599 naming_controller.go:302] Shutting down NamingConditionController
I0114 19:05:32.430219  125599 dynamic_serving_content.go:191] "Failed to remove file watch, it may have been deleted" file="/tmp/apiextensions-apiserver1032597747/apiserver.crt" err="can't remove non-existent inotify watch for: /tmp/apiextensions-apiserver1032597747/apiserver.crt"
I0114 19:05:32.430239  125599 crd_finalizer.go:278] Shutting down CRDFinalizer
I0114 19:05:32.430248  125599 establishing_controller.go:87] Shutting down EstablishingController
I0114 19:05:32.430258  125599 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0114 19:05:32.430275  125599 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
    --- SKIP: TestLimits/ApplyPatchType_nested_patch_under_limit (0.00s)
I0114 19:05:32.430150  125599 dynamic_serving_content.go:145] "Shutting down controller" name="serving-cert::/tmp/apiextensions-apiserver1032597747/apiserver.crt::/tmp/apiextensions-apiserver1032597747/apiserver.key"
I0114 19:05:32.430239  125599 secure_serving.go:246] Stopped listening on 127.0.0.1:45605
E0114 19:05:32.430318  125599 dynamic_serving_content.go:140] "Failed to watch cert and key file, will retry later" err="error adding watch for file /tmp/apiextensions-apiserver1032597747/apiserver.crt: no such file or directory"

=== Failed
=== FAIL: test/integration/client TestExtractModifyApply_ForceOwnership (4.89s)
E0114 18:57:42.274028  118439 controller.go:201] Unable to remove endpoints from kubernetes service: Get "https://127.0.0.1:38897/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:38897: connect: connection refused
    testserver.go:380: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
I0114 18:57:42.932671  118439 serving.go:342] Generated self-signed cert (/tmp/kubernetes-kube-apiserver2557836671/apiserver.crt, /tmp/kubernetes-kube-apiserver2557836671/apiserver.key)
I0114 18:57:42.932687  118439 server.go:546] external host was not specified, using 127.0.0.1
W0114 18:57:42.932698  118439 authentication.go:523] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
    testserver.go:215: runtime-config=map[api/all:true]
... skipping 200 lines ...
W0114 18:57:47.141349  118439 cacher.go:150] Terminating all watchers from cacher *storage.StorageClass
W0114 18:57:47.142281  118439 cacher.go:150] Terminating all watchers from cacher *flowcontrol.FlowSchema
W0114 18:57:47.142502  118439 cacher.go:150] Terminating all watchers from cacher *flowcontrol.PriorityLevelConfiguration
W0114 18:57:47.143175  118439 cacher.go:150] Terminating all watchers from cacher *admissionregistration.ValidatingWebhookConfiguration
W0114 18:57:47.143387  118439 cacher.go:150] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration
W0114 18:57:47.143700  118439 cacher.go:150] Terminating all watchers from cacher *apiregistration.APIService
I0114 18:57:47.144288  118439 dynamic_cafile_content.go:210] "Failed to remove file watch, it may have been deleted" file="/tmp/kubernetes-kube-apiserver2557836671/client-ca.crt" err="can't remove non-existent inotify watch for: /tmp/kubernetes-kube-apiserver2557836671/client-ca.crt"
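For reference, the failing test's name describes client-go's extract → modify → apply flow with forced ownership: read the live object, extract the fields a given field manager owns into an apply configuration, mutate them, and apply them back with Force: true so conflicting fields are seized from other managers. A minimal sketch of that pattern, assuming a built clientset and an existing Deployment named "example" in namespace "default" (both assumptions, not taken from the test body):

package forceapply

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1ac "k8s.io/client-go/applyconfigurations/apps/v1"
	"k8s.io/client-go/kubernetes"
)

func forceApply(ctx context.Context, cs kubernetes.Interface) error {
	// Read the live object as last written by all field managers.
	live, err := cs.AppsV1().Deployments("default").Get(ctx, "example", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Extract only the fields owned by manager "mgr-a" into an apply configuration.
	ac, err := appsv1ac.ExtractDeployment(live, "mgr-a")
	if err != nil {
		return err
	}
	if ac.Spec == nil { // "mgr-a" may not own any spec fields yet
		ac.WithSpec(appsv1ac.DeploymentSpec())
	}
	ac.Spec.WithReplicas(3)
	// Force: true resolves server-side-apply conflicts by taking ownership
	// of the contested fields from whichever manager held them.
	_, err = cs.AppsV1().Deployments("default").Apply(ctx, ac,
		metav1.ApplyOptions{FieldManager: "mgr-a", Force: true})
	return err
}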

DONE 4057 tests, 58 skipped, 1 failure in 11.394s
+++ [0114 19:09:53] Saved JUnit XML test report to /logs/artifacts/junit_20220114-185129.xml
make[1]: *** [Makefile:184: test] Error 1
!!! [0114 19:09:53] Call tree:
!!! [0114 19:09:53]  1: hack/make-rules/test-integration.sh:96 runTests(...)
+++ [0114 19:09:53] Cleaning up etcd
+++ [0114 19:09:53] Integration test cleanup complete
make: *** [Makefile:203: test-integration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...