Result: FAILURE
Tests: 1 failed / 4839 succeeded
Started: 2022-11-24 10:51
Elapsed: 50m47s
Revision: master

Test Failures


k8s.io/kubernetes/test/integration/garbagecollector TestCRDDeletionCascading 0.00s

go test -v k8s.io/kubernetes/test/integration/garbagecollector -run TestCRDDeletionCascading$
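To reproduce locally, a sketch assuming a standard kubernetes/kubernetes checkout: integration tests need an etcd binary on PATH, and the WHAT/KUBE_TEST_ARGS Makefile variables are the usual way to scope the run to this package and test (paths and variable names per the upstream contributor docs; adjust for your environment).

hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"
make test-integration WHAT=./test/integration/garbagecollector KUBE_TEST_ARGS="-run TestCRDDeletionCascading$"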
=== RUN   TestCRDDeletionCascading
    testserver.go:414: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
I1124 11:26:36.332342  105310 serving.go:342] Generated self-signed cert (/tmp/kubernetes-kube-apiserver2895620313/apiserver.crt, /tmp/kubernetes-kube-apiserver2895620313/apiserver.key)
I1124 11:26:36.332369  105310 server.go:555] external host was not specified, using 127.0.0.1
W1124 11:26:36.332381  105310 authentication.go:520] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
    testserver.go:245: runtime-config=map[api/all:true]
    testserver.go:246: Starting kube-apiserver on port 32933...
W1124 11:26:36.722557  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.722594  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.722609  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.722947  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723355  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723397  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723795  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723825  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723877  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.723910  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.724098  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.724242  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.724289  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:36.724342  105310 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1124 11:26:36.724362  105310 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
W1124 11:26:36.724481  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.724497  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:36.749837  105310 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
W1124 11:26:36.749990  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:36.751022  105310 instance.go:277] Using reconciler: lease
W1124 11:26:36.859271  105310 storage_authentication.go:83] SelfSubjectReview API is disabled because corresponding feature gate APISelfSubjectReview is not enabled.
W1124 11:26:37.007216  105310 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.007266  105310 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W1124 11:26:37.009544  105310 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.027650  105310 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.030092  105310 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.037949  105310 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.044447  105310 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1124 11:26:37.053180  105310 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.053202  105310 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1124 11:26:37.055476  105310 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.055503  105310 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1124 11:26:37.096421  105310 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W1124 11:26:37.096455  105310 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W1124 11:26:37.100099  105310 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.113249  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W1124 11:26:37.125554  105310 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
W1124 11:26:37.126176  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
    testserver.go:266: Waiting for /healthz to be ok...
I1124 11:26:38.132564  105310 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2895620313/client-ca.crt"
I1124 11:26:38.132601  105310 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/kubernetes-kube-apiserver2895620313/apiserver.crt::/tmp/kubernetes-kube-apiserver2895620313/apiserver.key"
I1124 11:26:38.132566  105310 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/tmp/kubernetes-kube-apiserver2895620313/proxy-ca.crt"
I1124 11:26:38.133133  105310 secure_serving.go:210] Serving securely on 127.0.0.1:32933
I1124 11:26:38.133246  105310 apf_controller.go:361] Starting API Priority and Fairness config controller
I1124 11:26:38.133282  105310 controller.go:85] Starting OpenAPI controller
I1124 11:26:38.133323  105310 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1124 11:26:38.133356  105310 customresource_discovery_controller.go:288] Starting DiscoveryController
I1124 11:26:38.133441  105310 establishing_controller.go:76] Starting EstablishingController
I1124 11:26:38.133509  105310 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1124 11:26:38.133525  105310 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1124 11:26:38.133551  105310 available_controller.go:494] Starting AvailableConditionController
I1124 11:26:38.133564  105310 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1124 11:26:38.133585  105310 controller.go:83] Starting OpenAPI AggregationController
I1124 11:26:38.133744  105310 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1124 11:26:38.133789  105310 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1124 11:26:38.133809  105310 crd_finalizer.go:266] Starting CRDFinalizer
W1124 11:26:38.134033  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:38.134176  105310 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1124 11:26:38.134189  105310 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I1124 11:26:38.134218  105310 controller.go:80] Starting OpenAPI V3 AggregationController
I1124 11:26:38.134505  105310 controller.go:85] Starting OpenAPI V3 controller
I1124 11:26:38.134534  105310 naming_controller.go:291] Starting NamingConditionController
I1124 11:26:38.134569  105310 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2895620313/client-ca.crt"
I1124 11:26:38.134643  105310 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/tmp/kubernetes-kube-apiserver2895620313/proxy-ca.crt"
I1124 11:26:38.136980  105310 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/tmp/kubernetes-kube-apiserver2895620313/misty-crt.crt::/tmp/kubernetes-kube-apiserver2895620313/misty-crt.key"
W1124 11:26:38.137339  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:38.137380  105310 gc_controller.go:78] Starting apiserver lease garbage collector
W1124 11:26:38.137587  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:38.137655  105310 controller.go:121] Starting legacy_token_tracking_controller
I1124 11:26:38.137677  105310 shared_informer.go:273] Waiting for caches to sync for configmaps
I1124 11:26:38.138135  105310 autoregister_controller.go:141] Starting autoregister controller
I1124 11:26:38.138155  105310 cache.go:32] Waiting for caches to sync for autoregister controller
I1124 11:26:38.139136  105310 crdregistration_controller.go:111] Starting crd-autoregister controller
I1124 11:26:38.139158  105310 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
W1124 11:26:38.157759  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.158079  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.160788  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.166597  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
I1124 11:26:38.172242  105310 controller.go:615] quota admission added evaluator for: namespaces
I1124 11:26:38.221843  105310 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I1124 11:26:38.234103  105310 cache.go:39] Caches are synced for AvailableConditionController controller
I1124 11:26:38.234148  105310 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1124 11:26:38.234228  105310 apf_controller.go:366] Running API Priority and Fairness config worker
I1124 11:26:38.234238  105310 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I1124 11:26:38.234256  105310 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I1124 11:26:38.237990  105310 shared_informer.go:280] Caches are synced for configmaps
I1124 11:26:38.238350  105310 cache.go:39] Caches are synced for autoregister controller
I1124 11:26:38.239328  105310 shared_informer.go:280] Caches are synced for crd-autoregister
W1124 11:26:38.255689  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.289346  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.306963  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.324940  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.342103  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.356140  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.369405  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.382566  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.382607  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.413367  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.413544  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.423958  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.430968  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.437881  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.445778  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.453168  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.460765  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.467775  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.474511  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.481625  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.488610  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.495919  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.502707  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.510163  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.537530  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.537637  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.548417  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.556499  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.563674  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.571118  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.578474  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.585007  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.591749  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.599199  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.616502  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.632293  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:38.660153  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
I1124 11:26:38.762911  105310 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1124 11:26:39.149047  105310 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I1124 11:26:39.160196  105310 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I1124 11:26:39.160218  105310 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1124 11:26:39.203589  105310 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.0.0.1]
W1124 11:26:39.257735  105310 lease.go:250] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1124 11:26:39.259698  105310 controller.go:254] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8, ::1/128)
I1124 11:26:39.382033  105310 controller.go:615] quota admission added evaluator for: serviceaccounts
I1124 11:26:39.396490  105310 garbagecollector.go:154] Starting garbage collector controller
I1124 11:26:39.396515  105310 shared_informer.go:273] Waiting for caches to sync for garbage collector
I1124 11:26:39.396531  105310 graph_builder.go:275] garbage controller monitor not synced: no monitors
I1124 11:26:39.396553  105310 graph_builder.go:291] GraphBuilder running
I1124 11:26:39.396562  105310 graph_builder.go:263] started 0 new monitors, 0 currently running
I1124 11:26:39.403386  105310 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations admissionregistration.k8s.io/v1alpha1, Resource=validatingadmissionpolicies admissionregistration.k8s.io/v1alpha1, Resource=validatingadmissionpolicybindings apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta3, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta3, Resource=prioritylevelconfigurations internal.apiserver.k8s.io/v1alpha1, Resource=storageversions networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies networking.k8s.io/v1alpha1, Resource=clustercidrs node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles resource.k8s.io/v1alpha1, Resource=podschedulings resource.k8s.io/v1alpha1, Resource=resourceclaims resource.k8s.io/v1alpha1, Resource=resourceclaimtemplates resource.k8s.io/v1alpha1, Resource=resourceclasses scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments], removed: []
I1124 11:26:39.403418  105310 garbagecollector.go:226] reset restmapper
W1124 11:26:39.409985  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410021  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=configmaps", kind "/v1, Kind=ConfigMap"
W1124 11:26:39.410089  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410096  105310 graph_builder.go:176] using a shared informer for resource "rbac.authorization.k8s.io/v1, Resource=rolebindings", kind "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
W1124 11:26:39.410149  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410171  105310 graph_builder.go:176] using a shared informer for resource "internal.apiserver.k8s.io/v1alpha1, Resource=storageversions", kind "internal.apiserver.k8s.io/v1alpha1, Kind=StorageVersion"
W1124 11:26:39.410205  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410212  105310 graph_builder.go:176] using a shared informer for resource "flowcontrol.apiserver.k8s.io/v1beta3, Resource=prioritylevelconfigurations", kind "flowcontrol.apiserver.k8s.io/v1beta3, Kind=PriorityLevelConfiguration"
W1124 11:26:39.410243  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410258  105310 graph_builder.go:176] using a shared informer for resource "resource.k8s.io/v1alpha1, Resource=resourceclaimtemplates", kind "resource.k8s.io/v1alpha1, Kind=ResourceClaimTemplate"
W1124 11:26:39.410301  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410314  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=services", kind "/v1, Kind=Service"
W1124 11:26:39.410351  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410362  105310 graph_builder.go:176] using a shared informer for resource "apps/v1, Resource=replicasets", kind "apps/v1, Kind=ReplicaSet"
W1124 11:26:39.410388  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410394  105310 graph_builder.go:176] using a shared informer for resource "batch/v1, Resource=cronjobs", kind "batch/v1, Kind=CronJob"
W1124 11:26:39.410420  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410432  105310 graph_builder.go:176] using a shared informer for resource "node.k8s.io/v1, Resource=runtimeclasses", kind "node.k8s.io/v1, Kind=RuntimeClass"
W1124 11:26:39.410466  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410507  105310 graph_builder.go:176] using a shared informer for resource "apiregistration.k8s.io/v1, Resource=apiservices", kind "apiregistration.k8s.io/v1, Kind=APIService"
W1124 11:26:39.410556  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410574  105310 graph_builder.go:176] using a shared informer for resource "autoscaling/v2, Resource=horizontalpodautoscalers", kind "autoscaling/v2, Kind=HorizontalPodAutoscaler"
W1124 11:26:39.410616  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410639  105310 graph_builder.go:176] using a shared informer for resource "networking.k8s.io/v1, Resource=ingressclasses", kind "networking.k8s.io/v1, Kind=IngressClass"
W1124 11:26:39.410680  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410696  105310 graph_builder.go:176] using a shared informer for resource "storage.k8s.io/v1, Resource=storageclasses", kind "storage.k8s.io/v1, Kind=StorageClass"
W1124 11:26:39.410759  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410777  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=resourcequotas", kind "/v1, Kind=ResourceQuota"
W1124 11:26:39.410831  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410864  105310 graph_builder.go:176] using a shared informer for resource "storage.k8s.io/v1, Resource=volumeattachments", kind "storage.k8s.io/v1, Kind=VolumeAttachment"
W1124 11:26:39.410914  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.410928  105310 graph_builder.go:176] using a shared informer for resource "resource.k8s.io/v1alpha1, Resource=resourceclaims", kind "resource.k8s.io/v1alpha1, Kind=ResourceClaim"
W1124 11:26:39.410986  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411005  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=replicationcontrollers", kind "/v1, Kind=ReplicationController"
W1124 11:26:39.411050  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411066  105310 graph_builder.go:176] using a shared informer for resource "discovery.k8s.io/v1, Resource=endpointslices", kind "discovery.k8s.io/v1, Kind=EndpointSlice"
W1124 11:26:39.411107  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411117  105310 graph_builder.go:176] using a shared informer for resource "admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations", kind "admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration"
W1124 11:26:39.411157  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411177  105310 graph_builder.go:176] using a shared informer for resource "coordination.k8s.io/v1, Resource=leases", kind "coordination.k8s.io/v1, Kind=Lease"
W1124 11:26:39.411239  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411259  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=namespaces", kind "/v1, Kind=Namespace"
W1124 11:26:39.411301  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411321  105310 graph_builder.go:176] using a shared informer for resource "batch/v1, Resource=jobs", kind "batch/v1, Kind=Job"
W1124 11:26:39.411367  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411387  105310 graph_builder.go:176] using a shared informer for resource "policy/v1beta1, Resource=podsecuritypolicies", kind "policy/v1beta1, Kind=PodSecurityPolicy"
W1124 11:26:39.411432  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411442  105310 graph_builder.go:176] using a shared informer for resource "storage.k8s.io/v1, Resource=csistoragecapacities", kind "storage.k8s.io/v1, Kind=CSIStorageCapacity"
W1124 11:26:39.411501  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411519  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=serviceaccounts", kind "/v1, Kind=ServiceAccount"
W1124 11:26:39.411572  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411591  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=pods", kind "/v1, Kind=Pod"
W1124 11:26:39.411634  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411641  105310 graph_builder.go:176] using a shared informer for resource "admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations", kind "admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration"
W1124 11:26:39.411682  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411699  105310 graph_builder.go:176] using a shared informer for resource "admissionregistration.k8s.io/v1alpha1, Resource=validatingadmissionpolicies", kind "admissionregistration.k8s.io/v1alpha1, Kind=ValidatingAdmissionPolicy"
W1124 11:26:39.411752  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411770  105310 graph_builder.go:176] using a shared informer for resource "apps/v1, Resource=statefulsets", kind "apps/v1, Kind=StatefulSet"
W1124 11:26:39.411825  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411844  105310 graph_builder.go:176] using a shared informer for resource "rbac.authorization.k8s.io/v1, Resource=roles", kind "rbac.authorization.k8s.io/v1, Kind=Role"
W1124 11:26:39.411887  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411896  105310 graph_builder.go:176] using a shared informer for resource "resource.k8s.io/v1alpha1, Resource=resourceclasses", kind "resource.k8s.io/v1alpha1, Kind=ResourceClass"
W1124 11:26:39.411926  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.411949  105310 graph_builder.go:176] using a shared informer for resource "apps/v1, Resource=daemonsets", kind "apps/v1, Kind=DaemonSet"
W1124 11:26:39.411992  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412009  105310 graph_builder.go:176] using a shared informer for resource "networking.k8s.io/v1alpha1, Resource=clustercidrs", kind "networking.k8s.io/v1alpha1, Kind=ClusterCIDR"
W1124 11:26:39.412052  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412068  105310 graph_builder.go:176] using a shared informer for resource "policy/v1, Resource=poddisruptionbudgets", kind "policy/v1, Kind=PodDisruptionBudget"
W1124 11:26:39.412107  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412123  105310 graph_builder.go:176] using a shared informer for resource "flowcontrol.apiserver.k8s.io/v1beta3, Resource=flowschemas", kind "flowcontrol.apiserver.k8s.io/v1beta3, Kind=FlowSchema"
W1124 11:26:39.412164  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412180  105310 graph_builder.go:176] using a shared informer for resource "scheduling.k8s.io/v1, Resource=priorityclasses", kind "scheduling.k8s.io/v1, Kind=PriorityClass"
W1124 11:26:39.412238  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412257  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=endpoints", kind "/v1, Kind=Endpoints"
W1124 11:26:39.412312  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412402  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=limitranges", kind "/v1, Kind=LimitRange"
W1124 11:26:39.412471  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412489  105310 graph_builder.go:176] using a shared informer for resource "apps/v1, Resource=deployments", kind "apps/v1, Kind=Deployment"
W1124 11:26:39.412528  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412664  105310 graph_builder.go:176] using a shared informer for resource "apps/v1, Resource=controllerrevisions", kind "apps/v1, Kind=ControllerRevision"
W1124 11:26:39.412783  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412806  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=nodes", kind "/v1, Kind=Node"
W1124 11:26:39.412871  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412892  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=persistentvolumeclaims", kind "/v1, Kind=PersistentVolumeClaim"
W1124 11:26:39.412971  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.412991  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=secrets", kind "/v1, Kind=Secret"
W1124 11:26:39.413051  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413072  105310 graph_builder.go:176] using a shared informer for resource "storage.k8s.io/v1, Resource=csinodes", kind "storage.k8s.io/v1, Kind=CSINode"
W1124 11:26:39.413117  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413130  105310 graph_builder.go:176] using a shared informer for resource "certificates.k8s.io/v1, Resource=certificatesigningrequests", kind "certificates.k8s.io/v1, Kind=CertificateSigningRequest"
W1124 11:26:39.413173  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413190  105310 graph_builder.go:176] using a shared informer for resource "networking.k8s.io/v1, Resource=networkpolicies", kind "networking.k8s.io/v1, Kind=NetworkPolicy"
W1124 11:26:39.413234  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413251  105310 graph_builder.go:176] using a shared informer for resource "rbac.authorization.k8s.io/v1, Resource=clusterroles", kind "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
W1124 11:26:39.413294  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413310  105310 graph_builder.go:176] using a shared informer for resource "resource.k8s.io/v1alpha1, Resource=podschedulings", kind "resource.k8s.io/v1alpha1, Kind=PodScheduling"
W1124 11:26:39.413378  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413395  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=podtemplates", kind "/v1, Kind=PodTemplate"
W1124 11:26:39.413455  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413473  105310 graph_builder.go:176] using a shared informer for resource "/v1, Resource=persistentvolumes", kind "/v1, Kind=PersistentVolume"
W1124 11:26:39.413515  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413531  105310 graph_builder.go:176] using a shared informer for resource "storage.k8s.io/v1, Resource=csidrivers", kind "storage.k8s.io/v1, Kind=CSIDriver"
W1124 11:26:39.413572  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413590  105310 graph_builder.go:176] using a shared informer for resource "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", kind "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
W1124 11:26:39.413636  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413653  105310 graph_builder.go:176] using a shared informer for resource "admissionregistration.k8s.io/v1alpha1, Resource=validatingadmissionpolicybindings", kind "admissionregistration.k8s.io/v1alpha1, Kind=ValidatingAdmissionPolicyBinding"
W1124 11:26:39.413712  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413730  105310 graph_builder.go:176] using a shared informer for resource "apiextensions.k8s.io/v1, Resource=customresourcedefinitions", kind "apiextensions.k8s.io/v1, Kind=CustomResourceDefinition"
W1124 11:26:39.413794  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:39.413816  105310 graph_builder.go:176] using a shared informer for resource "networking.k8s.io/v1, Resource=ingresses", kind "networking.k8s.io/v1, Kind=Ingress"
I1124 11:26:39.413839  105310 graph_builder.go:231] synced monitors; added 55, kept 0, removed 0
I1124 11:26:39.414296  105310 graph_builder.go:263] started 55 new monitors, 55 currently running
I1124 11:26:39.414319  105310 garbagecollector.go:242] resynced monitors
I1124 11:26:39.414327  105310 shared_informer.go:273] Waiting for caches to sync for garbage collector
I1124 11:26:39.414343  105310 graph_builder.go:281] garbage controller monitor not yet synced: /v1, Resource=services
W1124 11:26:39.416906  105310 warnings.go:70] networking.k8s.io/v1alpha1 ClusterCIDR is deprecated in v1.28+, unavailable in v1.31+
W1124 11:26:39.417921  105310 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
I1124 11:26:39.418297  105310 graph_builder.go:635] GraphBuilder process object: v1/ServiceAccount, namespace aval, name default, uid 6df5d1f6-7414-43fc-8ec9-7ac92f4cfe3f, event type add, virtual=false
W1124 11:26:39.424190  105310 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
I1124 11:26:39.425002  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.storage.k8s.io, uid 866d8da5-6e1f-4f0e-b291-32e0d5170e9b, event type add, virtual=false
I1124 11:26:39.425065  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.batch, uid b914f9e8-0ee0-41e5-85d6-c9684080e9e6, event type add, virtual=false
I1124 11:26:39.425078  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.discovery.k8s.io, uid 47e08d1e-9f93-4501-8a85-adb3bea4f595, event type add, virtual=false
I1124 11:26:39.425091  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta3.flowcontrol.apiserver.k8s.io, uid ce5fe90a-453c-438a-985b-2cbb2aa59929, event type add, virtual=false
W1124 11:26:39.425232  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
I1124 11:26:39.425305  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name kube-system, uid 23923e3e-0785-4a4f-ac2d-743dfa4024a4, event type add, virtual=false
I1124 11:26:39.425334  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name kube-public, uid 1f7fd0c1-c9f2-4778-9ef8-5e82e9490960, event type add, virtual=false
I1124 11:26:39.425349  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name kube-node-lease, uid 47ac521d-07f0-4720-8c9c-3435fba86ac4, event type add, virtual=false
I1124 11:26:39.425366  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name default, uid 717d0403-3423-429a-a58c-4d76454fa8e8, event type add, virtual=false
W1124 11:26:39.425369  105310 warnings.go:70] networking.k8s.io/v1alpha1 ClusterCIDR is deprecated in v1.28+, unavailable in v1.31+
I1124 11:26:39.425387  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name aval, uid 9ed0d16b-fe6a-42e5-bdf6-e7fa90123aad, event type add, virtual=false
I1124 11:26:39.425404  105310 graph_builder.go:635] GraphBuilder process object: v1/Namespace, namespace , name crd-mixed, uid b5a80b46-226f-4cc5-aec3-2fad0ce9d315, event type add, virtual=false
I1124 11:26:39.425440  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.storage.k8s.io, uid a426a968-301d-472d-8937-8ca7618dee02, event type add, virtual=false
W1124 11:26:39.425451  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
I1124 11:26:39.425470  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.policy, uid 822079a7-04a9-495f-9383-cfb9c70fd533, event type add, virtual=false
I1124 11:26:39.425493  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.rbac.authorization.k8s.io, uid 165f23d5-7bd8-468a-a79a-53f6f5df0a26, event type add, virtual=false
I1124 11:26:39.425515  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.autoscaling, uid 09134a22-6357-479b-b219-22b611e28a2a, event type add, virtual=false
I1124 11:26:39.425528  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.certificates.k8s.io, uid 1b9f2893-776a-4220-b367-02b006bf18ec, event type add, virtual=false
I1124 11:26:39.425538  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.batch, uid 91cfa6c6-f0c2-4160-b404-2c6b940731c7, event type add, virtual=false
I1124 11:26:39.425548  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.node.k8s.io, uid c0c22814-daab-45c8-8e40-7d7a2520ef3d, event type add, virtual=false
I1124 11:26:39.425558  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.resource.k8s.io, uid 2f8c53d5-09a9-4d18-b5a6-4be338cf114f, event type add, virtual=false
I1124 11:26:39.425568  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.authentication.k8s.io, uid ae2d64e8-45b5-4db9-8173-8d95838c447f, event type add, virtual=false
I1124 11:26:39.425580  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v2beta1.autoscaling, uid 57bc36ef-f481-43ea-9a1f-69cd40e1fd6f, event type add, virtual=false
I1124 11:26:39.425592  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.coordination.k8s.io, uid 26966b28-303b-4514-bf3e-a9cfba68e3b5, event type add, virtual=false
I1124 11:26:39.425601  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.networking.k8s.io, uid 50d6a5d0-d822-4c7a-9e60-4679e701d986, event type add, virtual=false
I1124 11:26:39.425611  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.flowcontrol.apiserver.k8s.io, uid 44d1c425-f0ed-4dd5-a764-f3729bdd085a, event type add, virtual=false
I1124 11:26:39.426021  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.node.k8s.io, uid 89878ce2-ada6-4273-92fd-daea7b5fe1ea, event type add, virtual=false
I1124 11:26:39.426059  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.flowcontrol.apiserver.k8s.io, uid 41ecf709-6696-4fff-9843-67f34c110668, event type add, virtual=false
I1124 11:26:39.426071  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.internal.apiserver.k8s.io, uid 3f9e89f6-4251-4a98-9596-dd46de6f3722, event type add, virtual=false
I1124 11:26:39.426087  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1., uid 3110bbdc-4500-4dae-aa63-dbd3f6623bd0, event type add, virtual=false
I1124 11:26:39.426101  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.apps, uid 0a9b8b34-4bc4-4455-8622-d54d60fef462, event type add, virtual=false
I1124 11:26:39.426120  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.authorization.k8s.io, uid b8a3fa01-2fb3-4054-b5ff-2a50cca3394c, event type add, virtual=false
I1124 11:26:39.426132  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v2.autoscaling, uid 9bf05b52-a1cb-48b4-b895-52bad49a4d97, event type add, virtual=false
I1124 11:26:39.426142  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.apiextensions.k8s.io, uid 8405b760-ed67-4cd1-ad7e-58b868857c86, event type add, virtual=false
I1124 11:26:39.426163  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.policy, uid 5637eacc-a826-480b-bded-cd723026a5c1, event type add, virtual=false
I1124 11:26:39.426175  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.storage.k8s.io, uid 1c7408b3-d6dc-4f92-bbdc-1c8a96564778, event type add, virtual=false
I1124 11:26:39.426186  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta2.flowcontrol.apiserver.k8s.io, uid 510c66cb-804d-4493-86b3-3792963ef53c, event type add, virtual=false
I1124 11:26:39.426198  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.events.k8s.io, uid b7b92ee2-5409-4e22-af3f-376a2886453a, event type add, virtual=false
I1124 11:26:39.426210  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.events.k8s.io, uid c8c3f942-a995-4528-8623-745d47f34ec3, event type add, virtual=false
I1124 11:26:39.426220  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.networking.k8s.io, uid 9444db2d-85ea-4243-afbe-941354988245, event type add, virtual=false
I1124 11:26:39.426232  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.scheduling.k8s.io, uid c14f4b97-2f38-45eb-8031-1c5d95078127, event type add, virtual=false
I1124 11:26:39.426243  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1.admissionregistration.k8s.io, uid 4b545eaa-a9a2-4d5b-b5c0-161bbd9b4f5c, event type add, virtual=false
I1124 11:26:39.426253  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1alpha1.admissionregistration.k8s.io, uid ecc16da9-4578-40bf-b6b2-967c70f60021, event type add, virtual=false
I1124 11:26:39.426264  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v2beta2.autoscaling, uid 345a902e-9a81-496d-9936-29575f04f352, event type add, virtual=false
I1124 11:26:39.426275  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.discovery.k8s.io, uid 3fb971a4-ab95-46b2-a457-92eee33ea5d4, event type add, virtual=false
I1124 11:26:39.426287  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name service-accounts, uid d3c34f01-0573-45d7-aeb9-94f2488933ca, event type add, virtual=false
I1124 11:26:39.426300  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name exempt, uid 8ba44ee6-a853-4491-ae3b-562e8431b8e5, event type add, virtual=false
I1124 11:26:39.426320  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name system-nodes, uid b006921a-aaed-4e68-8e9d-b14b9e242a9e, event type add, virtual=false
    garbage_collector_test.go:1221: First pass CRD cascading deletion
I1124 11:26:39.426331  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name probes, uid 46b53bc7-bf37-4b7f-b0b3-09a935343e5a, event type add, virtual=false
I1124 11:26:39.426343  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name system-leader-election, uid ef94787b-3c38-4c3b-a3be-2aa62fe0f4a5, event type add, virtual=false
I1124 11:26:39.426354  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name kube-controller-manager, uid d150235d-b82c-4f05-b335-44c1cd66d795, event type add, virtual=false
I1124 11:26:39.426365  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name kube-system-service-accounts, uid 70b9c2a3-aae2-42b6-8399-7290606fe9a1, event type add, virtual=false
I1124 11:26:39.426392  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name global-default, uid 95a1a2bb-9705-4436-baae-f3e38fb9b2c5, event type add, virtual=false
I1124 11:26:39.426580  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name catch-all, uid c4dbf9df-180f-485f-af71-e4b59a3f075d, event type add, virtual=false
I1124 11:26:39.426596  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name system-node-high, uid b7d7bf64-3e52-4e46-8ec0-7f40327e19cd, event type add, virtual=false
I1124 11:26:39.426616  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name workload-leader-election, uid 01d4a264-3011-4936-b473-4df66973bef5, event type add, virtual=false
I1124 11:26:39.426642  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name endpoint-controller, uid 6f9f5be5-2267-46eb-94a9-ab2af4c6f744, event type add, virtual=false
I1124 11:26:39.426656  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema, namespace , name kube-scheduler, uid c7beae2a-3283-424a-9bfc-ce0123442ddd, event type add, virtual=false
I1124 11:26:39.426668  105310 graph_builder.go:635] GraphBuilder process object: v1/ServiceAccount, namespace crd-mixed, name default, uid c074c266-39ab-4cdf-9117-d5bcde13458c, event type add, virtual=false
I1124 11:26:39.426681  105310 graph_builder.go:635] GraphBuilder process object: v1/Service, namespace default, name kubernetes, uid 27c79b49-c98a-43f1-bd2e-4ee61ee9051a, event type add, virtual=false
I1124 11:26:39.426694  105310 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-apiserver-3x5is7hxcstzsabhcc6fukbrv4, uid 8ff889f2-94ee-449c-8be3-eeef7fc86105, event type add, virtual=false
I1124 11:26:39.426706  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name exempt, uid 46fafa4d-5e16-40ef-bb28-29eb29da073c, event type add, virtual=false
I1124 11:26:39.426717  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name system, uid cda1bdf7-6ccd-4077-a460-3d8b3b3c6ed7, event type add, virtual=false
I1124 11:26:39.426728  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name node-high, uid cb05bfac-0846-4479-9c01-77b89165dce6, event type add, virtual=false
I1124 11:26:39.426738  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name leader-election, uid 92405996-849b-412d-9c87-a4ff30b9ac40, event type add, virtual=false
I1124 11:26:39.426747  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name workload-high, uid 6cb563d3-bcaf-416d-8d66-dd553faa5a50, event type add, virtual=false
I1124 11:26:39.426757  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name workload-low, uid b744c9f9-551b-47e3-a01d-3cbf2f36dd2c, event type add, virtual=false
I1124 11:26:39.426767  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name global-default, uid f2231645-5bb4-4f24-8953-35e89e300643, event type add, virtual=false
I1124 11:26:39.426776  105310 graph_builder.go:635] GraphBuilder process object: flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration, namespace , name catch-all, uid 15d1a41c-33c1-41e5-b06f-bbeb34907189, event type add, virtual=false
I1124 11:26:39.426787  105310 graph_builder.go:635] GraphBuilder process object: scheduling.k8s.io/v1/PriorityClass, namespace , name system-node-critical, uid 47532f91-2552-493f-890a-82314b767579, event type add, virtual=false
I1124 11:26:39.426797  105310 graph_builder.go:635] GraphBuilder process object: scheduling.k8s.io/v1/PriorityClass, namespace , name system-cluster-critical, uid b7406b0a-2394-4b04-af28-8ab84118b80a, event type add, virtual=false
I1124 11:26:39.427453  105310 graph_builder.go:635] GraphBuilder process object: v1/ConfigMap, namespace kube-system, name extension-apiserver-authentication, uid 72c9d2bb-fd63-4a47-8a1e-b81d4bf8058f, event type add, virtual=false
W1124 11:26:39.431457  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+
W1124 11:26:39.431491  105310 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 PriorityLevelConfiguration is deprecated in v1.29+, unavailable in v1.32+
I1124 11:26:39.444853  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type add, virtual=false
I1124 11:26:39.458240  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type update, virtual=false
I1124 11:26:39.458320  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.mygroup.example.com, uid f1a4f0d6-11e2-4eb4-a3d3-c6e90a7ea7c5, event type add, virtual=false
I1124 11:26:39.485769  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type update, virtual=false
I1124 11:26:39.498031  105310 shared_informer.go:280] Caches are synced for garbage collector
I1124 11:26:39.498152  105310 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1124 11:26:39.515256  105310 shared_informer.go:280] Caches are synced for garbage collector
I1124 11:26:39.515287  105310 garbagecollector.go:263] synced garbage collector
I1124 11:26:41.964479  105310 controller.go:615] quota admission added evaluator for: foo8p4k8as.mygroup.example.com
    garbage_collector_test.go:1245: created owner "ownerc2bzc"
I1124 11:26:42.010679  105310 graph_builder.go:635] GraphBuilder process object: v1/ConfigMap, namespace crd-mixed, name dependentzw78l, uid c719df56-0b29-4c47-96b6-c6923eae350a, event type add, virtual=false
I1124 11:26:42.010715  105310 graph_builder.go:371] add virtual node.identity: [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerc2bzc, uid: 8e90903b-4f65-425b-9266-5f8003525649]

I1124 11:26:42.010784  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
    garbage_collector_test.go:1254: created dependent "dependentzw78l"
I1124 11:26:42.010848  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.016224  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.016311  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.026645  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.026718  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.046872  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.046948  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.087483  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.087588  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.168386  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.168486  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.328687  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.328787  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:42.649485  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:42.649567  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
I1124 11:26:43.290199  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=true
I1124 11:26:43.290284  105310 garbagecollector.go:377] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc0019b7770):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:true, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a.
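The burst of "unable to get REST mapping for mygroup.example.com/v1beta1/foo8p4k8a" retries above ends once the next discovery sync (attempt 1, below) adds a monitor for the new foo8p4k8as resource: the collector cannot resolve the owner's GroupKind to a resource until the RESTMapper is rebuilt from discovery. The following is a minimal, purely illustrative Go sketch of that lookup, not the controller's code; loading the default kubeconfig is an assumption made only so the example is runnable.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable cluster via the default kubeconfig (illustrative only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Rebuild a RESTMapper from current discovery data, similar in spirit to the
	// "reset restmapper" step logged below.
	groups, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groups)

	// The owner reference seen in the log: mygroup.example.com/v1beta1, Kind=foo8p4k8a.
	mapping, err := mapper.RESTMapping(
		schema.GroupKind{Group: "mygroup.example.com", Kind: "foo8p4k8a"}, "v1beta1")
	if err != nil {
		// Until the CRD shows up in discovery, this is the failure the GC keeps retrying.
		fmt.Println("no REST mapping yet:", err)
		return
	}
	fmt.Println("mapped to resource:", mapping.Resource)
}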
I1124 11:26:44.523055  105310 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=foo8p4k8as], removed: []
I1124 11:26:44.523080  105310 garbagecollector.go:226] reset restmapper
W1124 11:26:44.529355  105310 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:26:44.529389  105310 graph_builder.go:176] using a shared informer for resource "mygroup.example.com/v1beta1, Resource=foo8p4k8as", kind "mygroup.example.com/v1beta1, Kind=foo8p4k8a"
I1124 11:26:44.529433  105310 graph_builder.go:231] synced monitors; added 1, kept 55, removed 0
I1124 11:26:44.529481  105310 graph_builder.go:263] started 1 new monitors, 56 currently running
I1124 11:26:44.529493  105310 garbagecollector.go:242] resynced monitors
I1124 11:26:44.529500  105310 shared_informer.go:273] Waiting for caches to sync for garbage collector
I1124 11:26:44.529538  105310 graph_builder.go:281] garbage controller monitor not yet synced: mygroup.example.com/v1beta1, Resource=foo8p4k8as
I1124 11:26:44.530886  105310 graph_builder.go:635] GraphBuilder process object: mygroup.example.com/v1beta1/foo8p4k8a, namespace crd-mixed, name ownerc2bzc, uid 8e90903b-4f65-425b-9266-5f8003525649, event type add, virtual=false
I1124 11:26:44.630068  105310 shared_informer.go:280] Caches are synced for garbage collector
I1124 11:26:44.630096  105310 garbagecollector.go:263] synced garbage collector
I1124 11:26:44.630133  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerc2bzc" objectUID=8e90903b-4f65-425b-9266-5f8003525649 kind="foo8p4k8a" virtual=false
I1124 11:26:44.632505  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerc2bzc, uid: 8e90903b-4f65-425b-9266-5f8003525649]'s doesn't have an owner, continue on next item
I1124 11:26:48.510427  105310 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-apiserver-3x5is7hxcstzsabhcc6fukbrv4, uid 8ff889f2-94ee-449c-8be3-eeef7fc86105, event type update, virtual=false
W1124 11:26:49.194410  105310 lease.go:250] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1124 11:26:49.195939  105310 controller.go:254] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8, ::1/128)
I1124 11:26:49.636842  105310 garbagecollector.go:196] no resource updates from discovery, skipping garbage collector sync
I1124 11:26:52.041240  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type update, virtual=false
I1124 11:26:52.059978  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type update, virtual=false
I1124 11:26:52.077225  105310 graph_builder.go:635] GraphBuilder process object: mygroup.example.com/v1beta1/foo8p4k8a, namespace crd-mixed, name ownerc2bzc, uid 8e90903b-4f65-425b-9266-5f8003525649, event type delete, virtual=false
I1124 11:26:52.077391  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/dependentzw78l" objectUID=c719df56-0b29-4c47-96b6-c6923eae350a kind="ConfigMap" virtual=false
I1124 11:26:52.085982  105310 garbagecollector.go:409] according to the absentOwnerCache, object c719df56-0b29-4c47-96b6-c6923eae350a's owner mygroup.example.com/v1beta1/foo8p4k8a, ownerc2bzc does not exist in namespace crd-mixed
I1124 11:26:52.086013  105310 garbagecollector.go:548] classify references of [v1/ConfigMap, namespace: crd-mixed, name: dependentzw78l, uid: c719df56-0b29-4c47-96b6-c6923eae350a].
solid: []v1.OwnerReference(nil)
dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerc2bzc", UID:"8e90903b-4f65-425b-9266-5f8003525649", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}
waitingForDependentsDeletion: []v1.OwnerReference(nil)
I1124 11:26:52.086053  105310 garbagecollector.go:613] "Deleting object" object="crd-mixed/dependentzw78l" objectUID=c719df56-0b29-4c47-96b6-c6923eae350a kind="ConfigMap" propagationPolicy=Background
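For context on what "dangling" means in the classification above: the dependent ConfigMap carries an ownerReference that points at the custom resource which has just been deleted along with its CRD, so the collector deletes the ConfigMap with Background propagation. The sketch below shows how such a dependent is wired up via metadata.ownerReferences; it reuses names from the log for readability, but it is a hypothetical helper rather than the test's actual code, and the clientset argument is assumed.

package gcdemo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// createDependent is a hypothetical helper: it creates a ConfigMap owned by the
// custom resource, so deleting the owner leaves this reference dangling and the
// garbage collector removes the ConfigMap (Background propagation, as logged above).
func createDependent(ctx context.Context, cs kubernetes.Interface, ownerUID types.UID) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dependentzw78l", // name taken from the log
			Namespace: "crd-mixed",
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "mygroup.example.com/v1beta1",
				Kind:       "foo8p4k8a",
				Name:       "ownerc2bzc",
				UID:        ownerUID,
			}},
		},
	}
	_, err := cs.CoreV1().ConfigMaps("crd-mixed").Create(ctx, cm, metav1.CreateOptions{})
	return err
}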
I1124 11:26:52.093616  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 9b6d0e01-b32e-4fd5-91e4-7a310ec60f03, event type delete, virtual=false
I1124 11:26:52.100797  105310 graph_builder.go:635] GraphBuilder process object: v1/ConfigMap, namespace crd-mixed, name dependentzw78l, uid c719df56-0b29-4c47-96b6-c6923eae350a, event type delete, virtual=false
I1124 11:26:52.113846  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.mygroup.example.com, uid f1a4f0d6-11e2-4eb4-a3d3-c6e90a7ea7c5, event type delete, virtual=false
W1124 11:26:53.094212  105310 cacher.go:162] Terminating all watchers from cacher foo8p4k8as.mygroup.example.com
E1124 11:26:53.095368  105310 reflector.go:140] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1124 11:26:54.172922  105310 reflector.go:424] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:26:54.172962  105310 reflector.go:140] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
    garbage_collector_test.go:1225: Second pass CRD cascading deletion
I1124 11:26:54.574123  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type add, virtual=false
I1124 11:26:54.582750  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.mygroup.example.com, uid 958f8f74-2daa-4a95-9537-8e33b83f1feb, event type add, virtual=false
I1124 11:26:54.590214  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type update, virtual=false
I1124 11:26:54.600084  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type update, virtual=false
I1124 11:26:54.644190  105310 garbagecollector.go:196] no resource updates from discovery, skipping garbage collector sync
    garbage_collector_test.go:1245: created owner "ownerdsvtv"
    garbage_collector_test.go:1254: created dependent "dependentmdd8z"
I1124 11:26:57.142374  105310 graph_builder.go:635] GraphBuilder process object: v1/ConfigMap, namespace crd-mixed, name dependentmdd8z, uid 4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db, event type add, virtual=false
I1124 11:26:57.142399  105310 graph_builder.go:371] add virtual node.identity: [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]

I1124 11:26:57.142463  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.152864  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.152899  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.158551  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.161827  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.161862  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.172476  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.185276  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.185310  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.205643  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.211205  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.211237  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.251884  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.253901  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.253935  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.334310  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=true
I1124 11:26:57.335867  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:57.335892  105310 garbagecollector.go:387] item [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a] hasn't been observed via informer yet
I1124 11:26:57.341376  105310 graph_builder.go:635] GraphBuilder process object: mygroup.example.com/v1beta1/foo8p4k8a, namespace crd-mixed, name ownerdsvtv, uid 18c7985b-0a5c-408a-a977-194d755d1e7a, event type add, virtual=false
I1124 11:26:57.496017  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/ownerdsvtv" objectUID=18c7985b-0a5c-408a-a977-194d755d1e7a kind="foo8p4k8a" virtual=false
I1124 11:26:57.507332  105310 garbagecollector.go:540] object [mygroup.example.com/v1beta1/foo8p4k8a, namespace: crd-mixed, name: ownerdsvtv, uid: 18c7985b-0a5c-408a-a977-194d755d1e7a]'s doesn't have an owner, continue on next item
I1124 11:26:58.750382  105310 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-apiserver-3x5is7hxcstzsabhcc6fukbrv4, uid 8ff889f2-94ee-449c-8be3-eeef7fc86105, event type update, virtual=false
W1124 11:26:59.202942  105310 lease.go:250] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1124 11:26:59.204377  105310 controller.go:254] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8, ::1/128)
I1124 11:26:59.651856  105310 garbagecollector.go:196] no resource updates from discovery, skipping garbage collector sync
I1124 11:27:04.659218  105310 garbagecollector.go:196] no resource updates from discovery, skipping garbage collector sync
I1124 11:27:07.158232  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type update, virtual=false
I1124 11:27:07.173302  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type update, virtual=false
I1124 11:27:07.189058  105310 graph_builder.go:635] GraphBuilder process object: mygroup.example.com/v1beta1/foo8p4k8a, namespace crd-mixed, name ownerdsvtv, uid 18c7985b-0a5c-408a-a977-194d755d1e7a, event type delete, virtual=false
I1124 11:27:07.189127  105310 garbagecollector.go:501] "Processing object" object="crd-mixed/dependentmdd8z" objectUID=4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db kind="ConfigMap" virtual=false
I1124 11:27:07.196735  105310 garbagecollector.go:409] according to the absentOwnerCache, object 4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db's owner mygroup.example.com/v1beta1/foo8p4k8a, ownerdsvtv does not exist in namespace crd-mixed
I1124 11:27:07.196761  105310 garbagecollector.go:548] classify references of [v1/ConfigMap, namespace: crd-mixed, name: dependentmdd8z, uid: 4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db].
solid: []v1.OwnerReference(nil)
dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8p4k8a", Name:"ownerdsvtv", UID:"18c7985b-0a5c-408a-a977-194d755d1e7a", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}
waitingForDependentsDeletion: []v1.OwnerReference(nil)
I1124 11:27:07.196799  105310 garbagecollector.go:613] "Deleting object" object="crd-mixed/dependentmdd8z" objectUID=4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db kind="ConfigMap" propagationPolicy=Background
I1124 11:27:07.203778  105310 graph_builder.go:635] GraphBuilder process object: apiextensions.k8s.io/v1/CustomResourceDefinition, namespace , name foo8p4k8as.mygroup.example.com, uid 43dd0ee2-5986-4695-93ae-07b92a0446d4, event type delete, virtual=false
I1124 11:27:07.210408  105310 graph_builder.go:635] GraphBuilder process object: v1/ConfigMap, namespace crd-mixed, name dependentmdd8z, uid 4d5cc628-dd6f-4c21-a2aa-be9e57a2c3db, event type delete, virtual=false
I1124 11:27:07.222736  105310 graph_builder.go:635] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.mygroup.example.com, uid 958f8f74-2daa-4a95-9537-8e33b83f1feb, event type delete, virtual=false
W1124 11:27:08.204873  105310 cacher.go:162] Terminating all watchers from cacher foo8p4k8as.mygroup.example.com
E1124 11:27:08.206022  105310 reflector.go:140] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:27:08.845905  105310 graph_builder.go:635] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-apiserver-3x5is7hxcstzsabhcc6fukbrv4, uid 8ff889f2-94ee-449c-8be3-eeef7fc86105, event type update, virtual=false
W1124 11:27:09.196112  105310 lease.go:250] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1124 11:27:09.197391  105310 controller.go:254] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8, ::1/128)
I1124 11:27:09.665664  105310 garbagecollector.go:172] Shutting down garbage collector controller
I1124 11:27:09.665808  105310 graph_builder.go:319] stopped 56 of 56 monitors
I1124 11:27:09.665820  105310 graph_builder.go:320] GraphBuilder stopping
I1124 11:27:09.666286  105310 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1beta1, Resource=foo8p4k8as]
I1124 11:27:09.665969  105310 controller.go:211] Shutting down kubernetes service endpoint reconciler
I1124 11:27:09.666320  105310 garbagecollector.go:226] reset restmapper
E1124 11:27:09.667121  105310 controller.go:214] Unable to remove endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /a13608fd-307f-4ba2-aca3-978d4f964e94/registry/masterleases//127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I1124 11:27:09.667222  105310 controller.go:89] Shutting down OpenAPI AggregationController
I1124 11:27:09.667242  105310 naming_controller.go:302] Shutting down NamingConditionController
I1124 11:27:09.667256  105310 controller.go:122] Shutting down OpenAPI controller
I1124 11:27:09.667272  105310 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I1124 11:27:09.667291  105310 apf_controller.go:373] Shutting down API Priority and Fairness config worker
I1124 11:27:09.667316  105310 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
I1124 11:27:09.667336  105310 customresource_discovery_controller.go:324] Shutting down DiscoveryController
I1124 11:27:09.667356  105310 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2895620313/client-ca.crt"
I1124 11:27:09.667364  105310 available_controller.go:506] Shutting down AvailableConditionController
I1124 11:27:09.667367  105310 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/tmp/kubernetes-kube-apiserver2895620313/misty-crt.crt::/tmp/kubernetes-kube-apiserver2895620313/misty-crt.key"
I1124 11:27:09.667390  105310 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I1124 11:27:09.667400  105310 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/tmp/kubernetes-kube-apiserver2895620313/client-ca.crt"
I1124 11:27:09.667404  105310 controller.go:86] Shutting down OpenAPI V3 AggregationController
I1124 11:27:09.667439  105310 storage_flowcontrol.go:179] APF bootstrap ensurer is exiting
I1124 11:27:09.667328  105310 crd_finalizer.go:278] Shutting down CRDFinalizer
I1124 11:27:09.667477  105310 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1124 11:27:09.667505  105310 autoregister_controller.go:165] Shutting down autoregister controller
I1124 11:27:09.667525  105310 controller.go:134] Ending legacy_token_tracking_controller
I1124 11:27:09.667539  105310 controller.go:135] Shutting down legacy_token_tracking_controller
I1124 11:27:09.667554  105310 gc_controller.go:91] Shutting down apiserver lease garbage collector
I1124 11:27:09.667566  105310 controller.go:115] Shutting down OpenAPI V3 controller
I1124 11:27:09.667243  105310 establishing_controller.go:87] Shutting down EstablishingController
I1124 11:27:09.667496  105310 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I1124 11:27:09.667891  105310 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/kubernetes-kube-apiserver2895620313/apiserver.crt::/tmp/kubernetes-kube-apiserver2895620313/apiserver.key"
I1124 11:27:09.667924  105310 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/tmp/kubernetes-kube-apiserver2895620313/proxy-ca.crt"
I1124 11:27:09.667955  105310 secure_serving.go:255] Stopped listening on 127.0.0.1:32933
I1124 11:27:09.667980  105310 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I1124 11:27:09.667357  105310 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
I1124 11:27:09.673159  105310 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/tmp/kubernetes-kube-apiserver2895620313/proxy-ca.crt"
I1124 11:27:09.673754  105310 controller.go:157] Shutting down quota evaluator
I1124 11:27:09.673780  105310 controller.go:176] quota evaluator worker shutdown
I1124 11:27:09.673895  105310 controller.go:176] quota evaluator worker shutdown
I1124 11:27:09.673915  105310 controller.go:176] quota evaluator worker shutdown
I1124 11:27:09.673923  105310 controller.go:176] quota evaluator worker shutdown
I1124 11:27:09.673930  105310 controller.go:176] quota evaluator worker shutdown
E1124 11:27:09.674447  105310 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 80431 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x45bf200?, 0x84911b0})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc008ac2858?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x45bf200, 0x84911b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xf?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:203 +0x5c
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).refreshLocked(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:222 +0x57
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).GroupsAndMaybeResources(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:128 +0xc5
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.ServerGroupsAndResources({0x59711c0, 0xc00d593080})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:392 +0x59
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).ServerGroupsAndResources(0x16000100eb5290?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:117 +0x25
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.GetAPIGroupResources({0x59711c0?, 0xc00d593080?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:148 +0x42
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).getDelegate(0xc002bdfd70)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:202 +0xb8
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).KindFor(0xc002bdfd70, {{0x0, 0x0}, {0xc00dda8308, 0x2}, {0xc00f386100, 0xf}})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:226 +0x75
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GraphBuilder).syncMonitors(0xc008ac2840, 0x10?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:210 +0x48c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).resyncMonitors(0xc0130458c0, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:135 +0x25
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:237 +0x21a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00edaac00})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x595c8f0?, 0xc004b2ed80?}, 0xc00deb5cd0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x595c8f0, 0xc004b2ed80}, 0x10?, 0xbc2ba5?, 0xc007de9d70?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x595c8f0, 0xc004b2ed80}, 0x50?, 0xc00edaac00?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:547 +0x49
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4552900?, 0xc00d4e8f90?, 0x4552900?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:538 +0x7c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:207 +0x23b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00deb5eb0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40fae7?, {0x592b520, 0xc0076dac30}, 0x1, 0xc008bbe780)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x41818a0?, 0x12a05f200, 0x0, 0x80?, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:135 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:92
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync(0xc0130458c0?, {0x7f1f95b50dd0?, 0xc01c4137d0?}, 0x12a05f200?, 0xc008bbe780?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:183 +0xec
created by k8s.io/kubernetes/test/integration/garbagecollector.setupWithServer.func2
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/garbagecollector/garbage_collector_test.go:275 +0x265
E1124 11:27:09.674565  105310 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 80431 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x45bf200?, 0x84911b0})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc013045930?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x45bf200, 0x84911b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc008ac2858?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x45bf200, 0x84911b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xf?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:203 +0x5c
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).refreshLocked(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:222 +0x57
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).GroupsAndMaybeResources(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:128 +0xc5
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.ServerGroupsAndResources({0x59711c0, 0xc00d593080})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:392 +0x59
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).ServerGroupsAndResources(0x16000100eb5290?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:117 +0x25
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.GetAPIGroupResources({0x59711c0?, 0xc00d593080?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:148 +0x42
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).getDelegate(0xc002bdfd70)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:202 +0xb8
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).KindFor(0xc002bdfd70, {{0x0, 0x0}, {0xc00dda8308, 0x2}, {0xc00f386100, 0xf}})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:226 +0x75
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GraphBuilder).syncMonitors(0xc008ac2840, 0x10?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:210 +0x48c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).resyncMonitors(0xc0130458c0, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:135 +0x25
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:237 +0x21a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00edaac00})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x595c8f0?, 0xc004b2ed80?}, 0xc00deb5cd0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x595c8f0, 0xc004b2ed80}, 0x10?, 0xbc2ba5?, 0xc007de9d70?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x595c8f0, 0xc004b2ed80}, 0x50?, 0xc00edaac00?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:547 +0x49
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4552900?, 0xc00d4e8f90?, 0x4552900?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:538 +0x7c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:207 +0x23b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00deb5eb0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40fae7?, {0x592b520, 0xc0076dac30}, 0x1, 0xc008bbe780)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x41818a0?, 0x12a05f200, 0x0, 0x80?, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:135 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:92
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync(0xc0130458c0?, {0x7f1f95b50dd0?, 0xc01c4137d0?}, 0x12a05f200?, 0xc008bbe780?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:183 +0xec
created by k8s.io/kubernetes/test/integration/garbagecollector.setupWithServer.func2
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/garbagecollector/garbage_collector_test.go:275 +0x265
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x167b37c]

goroutine 80431 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc013045930?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x45bf200, 0x84911b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc008ac2858?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x45bf200, 0x84911b0})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xf?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:203 +0x5c
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).refreshLocked(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:222 +0x57
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).GroupsAndMaybeResources(0xc00d593080)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:128 +0xc5
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery.ServerGroupsAndResources({0x59711c0, 0xc00d593080})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/discovery_client.go:392 +0x59
k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory.(*memCacheClient).ServerGroupsAndResources(0x16000100eb5290?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go:117 +0x25
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.GetAPIGroupResources({0x59711c0?, 0xc00d593080?})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:148 +0x42
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).getDelegate(0xc002bdfd70)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:202 +0xb8
k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper.(*DeferredDiscoveryRESTMapper).KindFor(0xc002bdfd70, {{0x0, 0x0}, {0xc00dda8308, 0x2}, {0xc00f386100, 0xf}})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/restmapper/discovery.go:226 +0x75
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GraphBuilder).syncMonitors(0xc008ac2840, 0x10?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:210 +0x48c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).resyncMonitors(0xc0130458c0, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:135 +0x25
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:237 +0x21a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00edaac00})
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x595c8f0?, 0xc004b2ed80?}, 0xc00deb5cd0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x595c8f0, 0xc004b2ed80}, 0x10?, 0xbc2ba5?, 0xc007de9d70?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x595c8f0, 0xc004b2ed80}, 0x50?, 0xc00edaac00?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:547 +0x49
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4552900?, 0xc00d4e8f90?, 0x4552900?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:538 +0x7c
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync.func1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:207 +0x23b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00deb5eb0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40fae7?, {0x592b520, 0xc0076dac30}, 0x1, 0xc008bbe780)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x41818a0?, 0x12a05f200, 0x0, 0x80?, 0x0?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:135 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:92
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Sync(0xc0130458c0?, {0x7f1f95b50dd0?, 0xc01c4137d0?}, 0x12a05f200?, 0xc008bbe780?)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:183 +0xec
created by k8s.io/kubernetes/test/integration/garbagecollector.setupWithServer.func2
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/garbagecollector/garbage_collector_test.go:275 +0x265
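The stack above is a nil pointer dereference inside the discovery client, hit while the garbage collector's Sync loop was still rebuilding the RESTMapper after the controller had begun shutting down. Below is a deliberately tiny, hypothetical Go sketch of that general failure mode (a method call reaching through a field that is nil once the underlying client has been torn down); it is not the client-go code, and all type names are invented.

package main

import "fmt"

// restClient stands in for an underlying client that has already been torn down.
type restClient struct{ base string }

func (c *restClient) get(path string) string { return c.base + path } // dereferences c

// discoveryClient mimics the shape of the real client: its work goes through an inner client.
type discoveryClient struct {
	rest *restClient
}

func (d *discoveryClient) groups() string {
	// If rest is nil (the consumer outlived whatever owned it), this line is the
	// "runtime error: invalid memory address or nil pointer dereference".
	return d.rest.get("/apis")
}

func main() {
	d := &discoveryClient{} // rest deliberately left nil to reproduce the pattern
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	fmt.Println(d.groups())
}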

				from junit_20221124-110836.xml




Error lines from build-log.txt

... skipping 50 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 164: bogus-expected-to-fail: command not found
!!! [1124 10:56:30] Call tree:
!!! [1124 10:56:30]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1124 10:56:30]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1124 10:56:30]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:140 juLog(...)
!!! [1124 10:56:30]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:168 record_command(...)
!!! [1124 10:56:30]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [1124 10:56:30] Running kubeadm tests
+++ [1124 10:56:31] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [1124 10:57:30] Running tests without code coverage 
{"Time":"2022-11-24T10:58:08.732910012Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t35.738s\n"}
✓  cmd/kubeadm/test/cmd (35.74s)
... skipping 220 lines ...
+++ [1124 11:00:39] Building kube-controller-manager
+++ [1124 11:00:40] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [1124 11:01:10] Generate kubeconfig for controller-manager
+++ [1124 11:01:10] Starting controller-manager
I1124 11:01:10.625560   44295 serving.go:348] Generated self-signed cert in-memory
W1124 11:01:11.418775   44295 authentication.go:426] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1124 11:01:11.418812   44295 authentication.go:320] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1124 11:01:11.418820   44295 authentication.go:344] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1124 11:01:11.418834   44295 authorization.go:226] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1124 11:01:11.418847   44295 authorization.go:194] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1124 11:01:11.419337   44295 controllermanager.go:182] Version: v1.27.0-alpha.0.46+8f2371bcceff79
I1124 11:01:11.419374   44295 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1124 11:01:11.420700   44295 secure_serving.go:210] Serving securely on [::]:10257
I1124 11:01:11.420823   44295 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1124 11:01:11.421063   44295 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 25 lines ...
I1124 11:01:11.453863   44295 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I1124 11:01:11.454368   44295 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
I1124 11:01:11.454399   44295 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I1124 11:01:11.454433   44295 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I1124 11:01:11.454450   44295 controllermanager.go:622] Started "csrsigning"
W1124 11:01:11.454479   44295 controllermanager.go:587] "tokencleaner" is disabled
E1124 11:01:11.454672   44295 core.go:207] failed to start cloud node lifecycle controller: no cloud provider provided
W1124 11:01:11.454695   44295 controllermanager.go:600] Skipping "cloud-node-lifecycle"
I1124 11:01:11.454728   44295 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
I1124 11:01:11.454745   44295 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I1124 11:01:11.454772   44295 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
W1124 11:01:11.454899   44295 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:01:11.454983   44295 controllermanager.go:622] Started "clusterrole-aggregation"
... skipping 13 lines ...
W1124 11:01:11.455983   44295 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1124 11:01:11.456100   44295 controllermanager.go:622] Started "job"
W1124 11:01:11.456112   44295 controllermanager.go:600] Skipping "nodeipam"
I1124 11:01:11.456249   44295 job_controller.go:191] Starting job controller
I1124 11:01:11.456270   44295 shared_informer.go:273] Waiting for caches to sync for job
W1124 11:01:11.456347   44295 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
E1124 11:01:11.456442   44295 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1124 11:01:11.456456   44295 controllermanager.go:600] Skipping "service"
I1124 11:01:11.456731   44295 controllermanager.go:622] Started "pvc-protection"
I1124 11:01:11.456904   44295 pvc_protection_controller.go:99] "Starting PVC protection controller"
I1124 11:01:11.456928   44295 shared_informer.go:273] Waiting for caches to sync for PVC protection
I1124 11:01:11.457067   44295 controllermanager.go:622] Started "ttl-after-finished"
I1124 11:01:11.457207   44295 ttlafterfinished_controller.go:104] Starting TTL after finished controller
... skipping 156 lines ...
I1124 11:01:11.669100   44295 shared_informer.go:280] Caches are synced for cronjob
I1124 11:01:11.854283   44295 shared_informer.go:280] Caches are synced for stateful set
I1124 11:01:11.868567   44295 shared_informer.go:280] Caches are synced for resource quota
I1124 11:01:11.883965   44295 shared_informer.go:280] Caches are synced for disruption
I1124 11:01:11.891513   44295 shared_informer.go:280] Caches are synced for resource quota
node/127.0.0.1 created
W1124 11:01:12.165897   44295 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [1124 11:01:12] Checking kubectl version
I1124 11:01:12.215289   44295 shared_informer.go:280] Caches are synced for garbage collector
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-alpha.0.46+8f2371bcceff79", GitCommit:"8f2371bcceff7962ddb4901c36536c6ff659755b", GitTreeState:"clean", BuildDate:"2022-11-24T08:30:04Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-alpha.0.46+8f2371bcceff79", GitCommit:"8f2371bcceff7962ddb4901c36536c6ff659755b", GitTreeState:"clean", BuildDate:"2022-11-24T08:30:04Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
I1124 11:01:12.285112   44295 shared_informer.go:280] Caches are synced for garbage collector
I1124 11:01:12.285143   44295 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   33s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [1124 11:01:17] Creating namespace namespace-1669287677-16558
namespace/namespace-1669287677-16558 created
Context "test" modified.
+++ [1124 11:01:17] Testing RESTMapper
+++ [1124 11:01:18] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 60 lines ...
namespace/namespace-1669287680-905 created
Context "test" modified.
+++ [1124 11:01:20] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
(Brbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
(BSuccessful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(BSuccessful
(Bmessage:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
(Bclusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
(Brbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
(Bclusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
(BSuccessful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
(Bclusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
(Brbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
(Brbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
(Brolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
(Brbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
(Brolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1669287688-15843 created
Context "test" modified.
+++ [1124 11:01:28] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(Brbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
(Brbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
(BSuccessful
... skipping 439 lines ...
has:valid-pod
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bcore.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-kubectl-describe-pod" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I1124 11:01:38.874720   49050 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I1124 11:01:38.876528   49050 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D46a9aa4f-b37f-4e95-ab70-c097f5d6b9bd&limit=500 200 OK in 1 milliseconds
(Bpoddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
(Bpoddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(Berror: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.9:
(BSuccessful
(Bmessage:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(B+++ [1124 11:01:55] "kubectl patch with resourceVersion 592" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
(BSuccessful
(Bmessage:kubectl-replace
has:kubectl-replace
Successful
(Bmessage:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
(Bmessage:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W1124 11:01:56.077358   44295 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(BI1124 11:01:56.576659   44295 event.go:294] "Event occurred" object="node-v1-test" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node node-v1-test event: Registered Node node-v1-test in Controller"
node/node-v1-test replaced
... skipping 30 lines ...
spec:
  containers:
  - image: registry.k8s.io/pause:3.9
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Berror: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bcore.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1124 11:02:05] Creating namespace namespace-1669287725-24108
namespace/namespace-1669287725-24108 created
Context "test" modified.
+++ [1124 11:02:05] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
... skipping 38 lines ...
I1124 11:02:08.491612   44295 event.go:294] "Event occurred" object="namespace-1669287725-18606/test-deployment-retainkeys-9f5d74f4f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-9f5d74f4f-drr64"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 20 lines ...
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
namespace/nsb created
apply.sh:181: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/a created
apply.sh:184: Successful get pods a -n nsb {{.metadata.name}}: a
(BW1124 11:02:14.230604   42319 cacher.go:162] Terminating all watchers from cacher resources.mygroup.example.com
E1124 11:02:14.232029   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
pod/b created
W1124 11:02:14.816727   52942 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
W1124 11:02:15.415346   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:02:15.415392   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apply.sh:188: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bpod "b" deleted
apply.sh:195: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/a created
apply.sh:200: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:202: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
(Bmessage:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I1124 11:02:17.725444   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287725-18606/prune-svc" clusterIPs=map[IPv4:10.0.0.195]
service/prune-svc created
W1124 11:02:17.725987   53114 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
W1124 11:02:17.833215   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:02:17.833259   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:02:20.004556   44295 horizontal.go:452] Horizontal Pod Autoscaler frontend has been deleted in namespace-1669287723-21450
apply.sh:220: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:221: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
namespace "nsb" deleted
W1124 11:02:22.982317   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:02:22.982354   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
persistentvolumeclaim/a-pvc created
W1124 11:02:27.443413   53188 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
I1124 11:02:27.443779   44295 event.go:294] "Event occurred" object="namespace-1669287725-18606/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I1124 11:02:27.460412   44295 event.go:294] "Event occurred" object="namespace-1669287725-18606/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
service/prune-svc pruned
apply.sh:228: Successful get pvc a-pvc {{.metadata.name}}: a-pvc
... skipping 25 lines ...
(Bservice "prune-svc" deleted
namespace/nsb created
apply.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/a created
apply.sh:258: Successful get pods a -n nsb {{.metadata.name}}: a
(Bpod/b created
W1124 11:02:35.189351   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:02:35.189402   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apply.sh:261: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod/b unchanged
W1124 11:02:35.358944   53482 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:265: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bnamespace "nsb" deleted
I1124 11:02:42.010111   44295 shared_informer.go:273] Waiting for caches to sync for resource quota
I1124 11:02:42.010158   44295 shared_informer.go:280] Caches are synced for resource quota
I1124 11:02:42.241418   44295 shared_informer.go:273] Waiting for caches to sync for garbage collector
I1124 11:02:42.241471   44295 shared_informer.go:280] Caches are synced for garbage collector
Successful
(Bmessage:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:276: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
(Bservice/a created
apply.sh:280: Successful get services a {{.metadata.name}}: a
(BSuccessful
(Bmessage:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
(Bapply.sh:302: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
(Bapply.sh:303: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
(Bmessage:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
(Bmessage:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:319: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I1124 11:02:47.049518   44295 namespace_controller.go:180] Namespace has been deleted nsb
apply.sh:325: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:331: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:337: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
... skipping 6 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:345: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapply.sh:349: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
customresourcedefinition.apiextensions.k8s.io/widgets.example.com condition met
Successful
(Bmessage:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
(BI1124 11:02:54.239150   42319 controller.go:615] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
... skipping 29 lines ...
pod/test-pod serverside-applied (server dry run)
apply.sh:405: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
(BSuccessful
(Bmessage:872
has:872
pod "test-pod" deleted
W1124 11:02:57.397777   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:02:57.397821   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(B+++ [1124 11:02:57] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 150 lines ...
(Bpod "nginx-extensions" deleted
Successful
(Bmessage:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
(Bmessage:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [1124 11:03:04] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I1124 11:03:07.237096   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-67d7f59574 to 3"
I1124 11:03:07.260614   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-67d7f59574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67d7f59574-vk4f7"
I1124 11:03:07.282243   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-67d7f59574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67d7f59574-qll7v"
I1124 11:03:07.282279   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-67d7f59574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67d7f59574-mptkq"
apps.sh:183: Successful get deployment nginx {{.metadata.name}}: nginx
(BSuccessful
(Bmessage:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1669287785-10829\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"registry.k8s.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1669287785-10829"
for: "hack/testdata/deployment-label-change2.yaml": error when patching "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I1124 11:03:15.806528   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-df5468db9 to 3"
I1124 11:03:15.823168   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-df5468db9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-df5468db9-b8zck"
I1124 11:03:15.838923   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-df5468db9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-df5468db9-289nk"
I1124 11:03:15.838959   44295 event.go:294] "Event occurred" object="namespace-1669287785-10829/nginx-df5468db9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-df5468db9-9v9fc"
Successful
... skipping 371 lines ...
+++ [1124 11:03:28] Creating namespace namespace-1669287808-8249
namespace/namespace-1669287808-8249 created
Context "test" modified.
+++ [1124 11:03:28] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
(Bmessage:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1669287808-8249 namespace.
has:No resources found
Successful
(Bmessage:
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1669287808-8249 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
(Bmessage:Error from server (NotFound): pods "abc" not found
has not:List
Successful
(Bmessage:I1124 11:03:29.978311   56586 loader.go:373] Config loaded from file:  /tmp/tmp.0As9P3mIqI/.kube/config
I1124 11:03:29.983543   56586 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I1124 11:03:29.999267   56586 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I1124 11:03:30.000843   56586 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 520 lines ...
get.sh:137: Successful get configmaps {{range.items}}{{ if eq .metadata.name "one" }}found{{end}}{{end}}:: :
(Bget.sh:138: Successful get configmaps {{range.items}}{{ if eq .metadata.name "two" }}found{{end}}{{end}}:: :
(Bget.sh:139: Successful get configmaps {{range.items}}{{ if eq .metadata.name "three" }}found{{end}}{{end}}:: :
(Bconfigmap/one created
configmap/two created
configmap/three created
W1124 11:03:35.769428   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:03:35.769462   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
(Bmessage:NAME               DATA   AGE
kube-root-ca.crt   1      7s
one                0      1s
three              0      0s
two                0      0s
... skipping 64 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
(Bmessage:valid-pod:
has:valid-pod:
Successful
(Bmessage:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-11-24T11:03:37Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-11-24T11:03:37Z"}}, "name":"valid-pod", "namespace":"namespace-1669287817-8212", "resourceVersion":"1076", "uid":"efccdda8-58db-496c-ad28-1196c17a1565"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"registry.k8s.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
(Bmessage:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-11-24T11:03:37Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-11-24T11:03:37Z"}],"name":"valid-pod","namespace":"namespace-1669287817-8212","resourceVersion":"1076","uid":"efccdda8-58db-496c-ad28-1196c17a1565"},"spec":{"containers":[{"image":"registry.k8s.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-11-24T11:03:37Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-11-24T11:03:37Z]] name:valid-pod namespace:namespace-1669287817-8212 resourceVersion:1076 uid:efccdda8-58db-496c-ad28-1196c17a1565] spec:map[containers:[map[image:registry.k8s.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
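The go-template failure above ("map has no entry for key") is the error text Go's text/template produces when a requested key is absent and missing keys are treated as errors. A minimal sketch that reproduces the same error with the standard library — this is not kubectl's printer code, which wires its template output up differently; it only demonstrates the underlying behavior:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Asking for {{.missing}} on a map with no such key, with
	// missingkey=error set, yields the same error text seen in the log.
	tmpl := template.Must(
		template.New("output").Option("missingkey=error").Parse("{{.missing}}"))
	if err := tmpl.Execute(os.Stdout, map[string]interface{}{"name": "valid-pod"}); err != nil {
		fmt.Println("template error:", err)
	}
}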
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
(Bmessage:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
... skipping 78 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
(Bmessage:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/redis-master created
pod/valid-pod created
Successful
... skipping 249 lines ...
+++ [1124 11:03:50] Creating namespace namespace-1669287830-27163
namespace/namespace-1669287830-27163 created
Context "test" modified.
+++ [1124 11:03:50] Testing kubectl exec POD COMMAND
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [1124 11:03:51] Creating namespace namespace-1669287831-19656
namespace/namespace-1669287831-19656 created
Context "test" modified.
+++ [1124 11:03:51] Testing kubectl exec TYPE/NAME COMMAND
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I1124 11:03:52.155172   44295 event.go:294] "Event occurred" object="namespace-1669287831-19656/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ff8wj"
I1124 11:03:52.173952   44295 event.go:294] "Event occurred" object="namespace-1669287831-19656/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nw5nl"
I1124 11:03:52.173984   44295 event.go:294] "Event occurred" object="namespace-1669287831-19656/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-cdnjb"
configmap/test-set-env-config created
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-cdnjb does not have a host assigned
has not:not found
Successful
(Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-cdnjb does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
(Bmessage:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
(Bmessage:user-specified
has:user-specified
Successful
(Bmessage:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
(B{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"a9ed9a97-ae51-48eb-a73e-a1ccbfa8b592","resourceVersion":"1176","creationTimestamp":"2022-11-24T11:03:53Z"}}
Successful
(Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"a9ed9a97-ae51-48eb-a73e-a1ccbfa8b592","resourceVersion":"1177","creationTimestamp":"2022-11-24T11:03:53Z"},"data":{"key1":"config1"}}
has:uid
Successful
(Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"a9ed9a97-ae51-48eb-a73e-a1ccbfa8b592","resourceVersion":"1177","creationTimestamp":"2022-11-24T11:03:53Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"a9ed9a97-ae51-48eb-a73e-a1ccbfa8b592"}}
Successful
(Bmessage:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 25 lines ...
+++ command: run_kubectl_create_validate_tests
+++ [1124 11:03:54] Creating namespace namespace-1669287834-7589
namespace/namespace-1669287834-7589 created
Context "test" modified.
+++ [1124 11:03:54] Testing kubectl create --validate
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [1124 11:03:54] Testing kubectl create --validate=true
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [1124 11:03:55] Testing kubectl create --validate=false
Successful
(Bmessage:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I1124 11:03:55.135210   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-85996f8dbd to 4"
I1124 11:03:55.162741   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-9jgb4"
I1124 11:03:55.179874   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-qjpgw"
I1124 11:03:55.179914   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-ln4h7"
I1124 11:03:55.196894   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-nm9qn"
deployment.apps "invalid-nginx-deployment" deleted
+++ [1124 11:03:55] Testing kubectl create --validate=strict
E1124 11:03:55.234891   44295 replica_set.go:544] sync "namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" failed with replicasets.apps "invalid-nginx-deployment-85996f8dbd" not found
E1124 11:03:55.238804   44295 replica_set.go:544] sync "namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" failed with replicasets.apps "invalid-nginx-deployment-85996f8dbd" not found
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [1124 11:03:55] Testing kubectl create --validate=warn
Warning: unknown field "spec.baz"
Warning: unknown field "spec.foo"
Successful
(Bmessage:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I1124 11:03:55.594122   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-85996f8dbd to 4"
I1124 11:03:55.612450   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-2lkdp"
I1124 11:03:55.628554   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-xrdkh"
I1124 11:03:55.628595   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-c98l7"
I1124 11:03:55.661595   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-4tk52"
deployment.apps "invalid-nginx-deployment" deleted
+++ [1124 11:03:55] Testing kubectl create --validate=ignore
E1124 11:03:55.694399   44295 replica_set.go:544] sync "namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" failed with replicasets.apps "invalid-nginx-deployment-85996f8dbd" not found
Successful
(Bmessage:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I1124 11:03:55.769990   44295 namespace_controller.go:180] Namespace has been deleted test-events
I1124 11:03:55.775903   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-85996f8dbd to 4"
I1124 11:03:55.793948   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-8j4b4"
I1124 11:03:55.813632   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-mkvwm"
I1124 11:03:55.813756   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-9mls9"
I1124 11:03:55.831288   44295 event.go:294] "Event occurred" object="namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-85996f8dbd-rjfg9"
deployment.apps "invalid-nginx-deployment" deleted
+++ [1124 11:03:55] Testing kubectl create
E1124 11:03:55.889641   44295 replica_set.go:544] sync "namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd" failed with Operation cannot be fulfilled on replicasets.apps "invalid-nginx-deployment-85996f8dbd": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1669287834-7589/invalid-nginx-deployment-85996f8dbd, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 86b7ee61-9435-436f-b7ba-88e68168999e, UID in object meta: 
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [1124 11:03:56] Testing kubectl create --validate=foo
Successful
(Bmessage:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false)
has:invalid - validate option "foo"
+++ exit code: 0
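
For reference, a minimal sketch of the three validation modes exercised above; deploy.yaml stands in for a manifest containing an unknown field such as "spec.foo" (the hack/testdata fixture itself is not reproduced here):

kubectl create -f deploy.yaml --validate=strict   # rejected: strict decoding error, unknown field "spec.foo"
kubectl create -f deploy.yaml --validate=warn     # created, but prints Warning: unknown field "spec.foo"
kubectl create -f deploy.yaml --validate=ignore   # created silently; the unknown field is dropped
kubectl create -f deploy.yaml --validate=foo      # rejected: must be one of strict (or true), warn, ignore (or false)
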
Recording: run_convert_tests
Running command: run_convert_tests

+++ Running case: test-cmd.run_convert_tests 
... skipping 50 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
(Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
(Bmessage:nginx:
has:nginx:
+++ exit code: 0
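
The repeated "Object 'Kind' is missing" failures above come from a fixture whose "kind" key is deliberately misspelled. A minimal shell sketch of an equivalent file (the path is illustrative, mirroring the hack/testdata fixture):

cat <<'EOF' > busybox-broken.yaml
apiVersion: v1
ind: Pod            # "kind" is deliberately misspelled, so the object cannot be decoded
metadata:
  name: busybox2
EOF
kubectl create -f busybox-broken.yaml   # expected: error ... Object 'Kind' is missing
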
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 103 lines ...
has:Timeout
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
(Bmessage:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
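
A brief sketch of the timeout handling checked above (the pod name is illustrative); a well-formed value is accepted, while a malformed one is rejected with the "Invalid timeout value" error shown in the log:

kubectl delete pod valid-pod --timeout=1m     # wait up to one minute; values like 1s | 2m | 3h are accepted
kubectl delete pod valid-pod --timeout=300s   # equivalent explicit-seconds form
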
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 149 lines ...
(BFlag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:296: Successful get foos/test {{.patched}}: value2
(BFlag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:298: Successful get foos/test {{.patched}}: <no value>
(B+++ [1124 11:04:07] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 328 lines ...
(Bcrd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace/non-native-resources created
bar.company.com/test created
crd.sh:524: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:527: Successful get bars {{len .items}}: 0
(BError from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
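
As the patch case above notes, strategic merge patch is not supported for custom resources, so the test falls back to a JSON merge patch. A minimal sketch against the same Foo resource:

kubectl patch foos/test --type=merge -p '{"patched":"value2"}'   # sets .patched
kubectl patch foos/test --type=merge -p '{"patched":null}'       # removes it again (renders as <no value>)
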
Recording: run_recursive_resources_tests
... skipping 5 lines ...
+++ [1124 11:04:24] Testing recursive resources
+++ [1124 11:04:24] Creating namespace namespace-1669287864-122
namespace/namespace-1669287864-122 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BW1124 11:04:24.490689   42319 cacher.go:162] Terminating all watchers from cacher foos.company.com
E1124 11:04:24.491926   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1124 11:04:24.674692   42319 cacher.go:162] Terminating all watchers from cacher bars.company.com
E1124 11:04:24.675999   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1124 11:04:24.833109   42319 cacher.go:162] Terminating all watchers from cacher resources.mygroup.example.com
E1124 11:04:24.834402   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1124 11:04:25.026008   42319 cacher.go:162] Terminating all watchers from cacher validfoos.company.com
E1124 11:04:25.027289   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BSuccessful
(Bmessage:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
(BSuccessful
(Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BW1124 11:04:25.489220   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:25.489256   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1124 11:04:25.710774   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:25.710812   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
(BSuccessful
(Bmessage:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BSuccessful
(Bmessage:Name:         busybox0
Namespace:    namespace-1669287864-122
Priority:     0
Node:         <none>
... skipping 155 lines ...
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BW1124 11:04:26.587744   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:26.587794   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
(BSuccessful
(Bmessage:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
(BSuccessful
(Bmessage:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BSuccessful
(Bmessage:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
(Bmessage:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bpod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
(BSuccessful
(Bmessage:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bpod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
(BSuccessful
(Bmessage:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W1124 11:04:27.862210   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:27.862256   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
(BW1124 11:04:28.025654   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:28.025696   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1124 11:04:28.117599   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-pjl2k"
I1124 11:04:28.134968   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-q27z5"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BI1124 11:04:28.233251   44295 namespace_controller.go:180] Namespace has been deleted non-native-resources
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
(Bgeneric-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
(Bgeneric-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
(Bgeneric-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
(BSuccessful
(Bmessage:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
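
The HPA checks above (min 1, max 2, target 80%) correspond to an autoscale call along these lines, shown here per controller rather than with the recursive -f form these tests appear to use:

kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}}'   # 1 2
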
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
(Bgeneric-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
(BI1124 11:04:29.119605   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287864-122/busybox0" clusterIPs=map[IPv4:10.0.0.87]
I1124 11:04:29.183954   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287864-122/busybox1" clusterIPs=map[IPv4:10.0.0.45]
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
(Bgeneric-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
(BSuccessful
(Bmessage:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
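
The expose checks above (port 80, no port name) can be reproduced with something like:

kubectl expose rc busybox0 --port=80
kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).port}}'   # 80
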
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
(BW1124 11:04:29.510496   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:29.510534   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
(BI1124 11:04:29.650584   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-8wgjd"
I1124 11:04:29.685001   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-4ht5r"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
(Bgeneric-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
(BSuccessful
(Bmessage:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
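
The scale step above takes both replication controllers from 1 to 2 replicas; a per-resource sketch of the same operation:

kubectl scale rc/busybox0 rc/busybox1 --replicas=2
kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # 2
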
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
(Bdeployment.apps/nginx1-deployment created
I1124 11:04:30.354631   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-545cdb7b5d to 2"
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1124 11:04:30.369437   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx1-deployment-545cdb7b5d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-545cdb7b5d-f54nb"
I1124 11:04:30.375722   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-55fcbfdf5c to 2"
I1124 11:04:30.384346   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx1-deployment-545cdb7b5d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-545cdb7b5d-pxhkc"
I1124 11:04:30.390578   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx0-deployment-55fcbfdf5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-55fcbfdf5c-k6wkt"
I1124 11:04:30.429615   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/nginx0-deployment-55fcbfdf5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-55fcbfdf5c-62kjb"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
(Bgeneric-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
(Bgeneric-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
(BSuccessful
(Bmessage:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
(BSuccessful
(Bmessage:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
generic-resources.sh:384: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
(BSuccessful
(Bmessage:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
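
A sketch of the pause/resume cycle verified above (deployment names taken from the log; the test itself drives this through the recursive -f form):

kubectl rollout pause deployment nginx1-deployment nginx0-deployment
kubectl get deployment -o go-template='{{range .items}}{{.spec.paused}}:{{end}}'   # true:true:
kubectl rollout resume deployment nginx1-deployment nginx0-deployment
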
W1124 11:04:31.814775   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:31.814831   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
(Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
(Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
W1124 11:04:32.930449   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:32.930485   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
(Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 18 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
(Bmessage:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
(Bmessage:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W1124 11:04:34.414247   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:34.414282   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
(Breplicationcontroller/busybox0 created
I1124 11:04:35.676740   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-qcbcj"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1124 11:04:35.699713   44295 event.go:294] "Event occurred" object="namespace-1669287864-122/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-q6b5v"
generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BSuccessful
(Bmessage:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
(Bmessage:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
(Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
+++ exit code: 0
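
All of the recursive cases above feed kubectl a directory via -f together with --recursive (-R); valid manifests in the tree are processed while broken ones surface as individual errors. An illustrative invocation (paths from the log, not the exact test command):

kubectl apply -f hack/testdata/recursive/pod --recursive
kubectl delete -f hack/testdata/recursive/pod -R --force --grace-period=0
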
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [1124 11:04:37] Testing kubectl(v1:namespaces)
Successful
(Bmessage:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
(Bmessage:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
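
The dry-run sequence above distinguishes client-side and server-side dry runs before the real create; roughly:

kubectl create namespace my-namespace --dry-run=client   # rendered locally; nothing is sent to the API server
kubectl create namespace my-namespace --dry-run=server   # admitted by the API server but not persisted
kubectl create namespace my-namespace                    # actually created
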
(Bquery for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
... skipping 132 lines ...
I1124 11:04:37.910111   62277 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1669287864-122/resourcequotas?limit=500 200 OK in 1 milliseconds
I1124 11:04:37.911320   62277 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1669287864-122/limitranges?limit=500 200 OK in 1 milliseconds
I1124 11:04:37.912807   62277 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb 200 OK in 1 milliseconds
I1124 11:04:37.914069   62277 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/resourcequotas?limit=500 200 OK in 1 milliseconds
I1124 11:04:37.915192   62277 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/limitranges?limit=500 200 OK in 1 milliseconds
(Bnamespace "my-namespace" deleted
W1124 11:04:40.126553   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:40.126591   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:04:42.133965   44295 shared_informer.go:273] Waiting for caches to sync for resource quota
I1124 11:04:42.134031   44295 shared_informer.go:280] Caches are synced for resource quota
I1124 11:04:42.275368   44295 shared_informer.go:273] Waiting for caches to sync for garbage collector
I1124 11:04:42.275422   44295 shared_informer.go:280] Caches are synced for garbage collector
W1124 11:04:42.380666   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:42.380702   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
(Bmessage:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
I1124 11:04:43.511598   44295 horizontal.go:452] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1669287864-122
I1124 11:04:43.522819   44295 horizontal.go:452] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1669287864-122
namespace/my-namespace created
core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(BSuccessful
... skipping 36 lines ...
namespace "namespace-1669287837-5609" deleted
namespace "namespace-1669287838-333" deleted
namespace "namespace-1669287840-6839" deleted
namespace "namespace-1669287841-32317" deleted
namespace "namespace-1669287864-122" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:Warning: deleting cluster-scoped resources
Successful
(Bmessage:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1669287674-17791" deleted
... skipping 32 lines ...
namespace "namespace-1669287837-5609" deleted
namespace "namespace-1669287838-333" deleted
namespace "namespace-1669287840-6839" deleted
namespace "namespace-1669287841-32317" deleted
namespace "namespace-1669287864-122" deleted
namespace "nsb" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas
(Bcore.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: :
(Bresourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
... skipping 7 lines ...
I1124 11:04:44.823878   62479 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I1124 11:04:44.832302   62479 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 1 milliseconds
I1124 11:04:44.834734   62479 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
(BI1124 11:04:44.979920   44295 resource_quota_controller.go:315] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
W1124 11:04:45.100591   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:45.100631   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1511: Successful get namespaces {{range.items}}{{ if eq .metadata.name "other" }}found{{end}}{{end}}:: :
(Bnamespace/other created
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
(Bcore.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BSuccessful
(Bmessage:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
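
The namespace-scoping checks above boil down to: a resource can be fetched by name only within a single namespace. For example:

kubectl get pods --namespace=other            # list within one namespace
kubectl get pod valid-pod -n other            # get by name within one namespace
kubectl get pod valid-pod --all-namespaces    # rejected: a resource cannot be retrieved by name across all namespaces
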
... skipping 5 lines ...
I1124 11:04:54.073704   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287675-21458
I1124 11:04:54.125455   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287680-905
I1124 11:04:54.216222   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287688-15843
I1124 11:04:54.245480   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287690-5275
I1124 11:04:54.261712   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287691-12565
I1124 11:04:54.272402   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287694-4346
W1124 11:04:54.486279   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:54.486317   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:04:54.499381   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287695-16270
I1124 11:04:54.641299   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287695-17391
I1124 11:04:54.662474   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287706-15769
I1124 11:04:54.669094   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287719-4786
I1124 11:04:54.698874   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287719-5139
I1124 11:04:54.716506   44295 namespace_controller.go:180] Namespace has been deleted namespace-1669287707-7810
... skipping 93 lines ...
core.sh:871: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret/test-secret created
core.sh:875: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
(Bcore.sh:876: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
(Bsecret "test-secret" deleted
core.sh:886: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(BW1124 11:04:59.304820   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:04:59.304859   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
core.sh:889: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
(Bcore.sh:890: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
(Bsecret "test-secret" deleted
secret/test-secret created
core.sh:896: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
(Bcore.sh:897: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
(Bsecret "test-secret" deleted
secret/secret-string-data created
core.sh:919: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
(Bcore.sh:920: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
(BW1124 11:05:00.185549   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:00.185589   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
(Bsecret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret "test-secret" deleted
namespace "test-secrets" deleted
I1124 11:05:02.192030   44295 namespace_controller.go:180] Namespace has been deleted other
W1124 11:05:04.056405   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:04.056450   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 43 lines ...
+++ command: run_client_config_tests
+++ [1124 11:05:13] Creating namespace namespace-1669287913-31347
namespace/namespace-1669287913-31347 created
Context "test" modified.
+++ [1124 11:05:13] Testing client config
Successful
(Bmessage:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
(Bmessage:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
(Bmessage:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
(Bmessage:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
(Bmessage:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
(Bmessage:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
(Bmessage:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
(Bmessage:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
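
The client-config failures above each come from pointing kubectl at configuration that does not exist; for instance:

kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
kubectl get pods --context=missing-context     # context was not found for specified context
kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user           # auth info "missing-user" does not exist
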
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 57 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 55 lines ...
Annotations:      batch.kubernetes.io/job-tracking: 
                  cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Thu, 24 Nov 2022 11:05:21 +0000
Pods Statuses:    1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=7299f00a-1c43-4c19-a0a3-8c8484d5cd4c
           job-name=test-job
  Containers:
   pi:
    Image:      registry.k8s.io/perl
... skipping 464 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
(Bmessage:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
W1124 11:05:32.324118   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:32.324154   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
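The Conflict above is the API server's optimistic-concurrency check rejecting an update submitted with a stale resourceVersion. One way to provoke this class of error (illustrative only; --resource-version=1 is a deliberately stale value, not necessarily the suite's exact command):
    kubectl set selector services redis-master role=padawan --resource-version=1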
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1124 11:05:32.742183   44295 namespace_controller.go:180] Namespace has been deleted test-jobs
... skipping 9 lines ...
service "redis-master" deleted
service "service-v1-test" deleted
core.sh:1102: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1106: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1124 11:05:34.096942   42319 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.45]
service/redis-master created
W1124 11:05:34.239917   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:34.239954   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:05:34.293129   42319 alloc.go:327] "allocated clusterIPs" service="default/redis-slave" clusterIPs=map[IPv4:10.0.0.188]
service/redis-slave created
core.sh:1111: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
Successful
message:NAME           RSRC
kubernetes     192
... skipping 21 lines ...
pod/testmetadata created (server dry run)
core.sh:1162: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1124 11:05:35.903255   42319 alloc.go:327] "allocated clusterIPs" service="default/testmetadata" clusterIPs=map[IPv4:10.0.0.94]
service/testmetadata created
pod/testmetadata created
core.sh:1166: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
W1124 11:05:36.011554   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:36.011591   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1167: Successful get service testmetadata {{(index .spec.ports 0).port}}: 80
Successful
message:kubectl-run
has:kubectl-run
I1124 11:05:36.219959   42319 alloc.go:327] "allocated clusterIPs" service="default/exposemetadata" clusterIPs=map[IPv4:10.0.0.245]
service/exposemetadata exposed
... skipping 251 lines ...
message:daemonset.apps/bind 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:122: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:123: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:126: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest:
apps.sh:127: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
... skipping 60 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1669287941-13492
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E1124 11:05:43.249322   44295 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1669287941-13492  72494c6e-a7bb-444c-98af-7141757f5d28 2210 2 2022-11-24 11:05:42 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-11-24 11:05:42 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-11-24 11:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0038c60a8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I1124 11:05:43.281976   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-7ggz2"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I1124 11:05:43.695859   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mn9r8"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 32 lines ...
I1124 11:05:45.397769   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287941-13492/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.66]
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I1124 11:05:45.781550   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6686477968 to 3"
I1124 11:05:45.792946   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-6686477968" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6686477968-9lw9c"
I1124 11:05:45.813944   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-6686477968" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6686477968-m7pbj"
I1124 11:05:45.813978   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-6686477968" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6686477968-npgpb"
... skipping 24 lines ...
pod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
I1124 11:05:47.867778   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287941-13492/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.155]
Successful
... skipping 32 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
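kubectl autoscale treats --max as mandatory while --min and --cpu-percent are optional, hence the failure above. A rough sketch (resource names are illustrative):
    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70   # accepted
    kubectl autoscale rc frontend --min=1 --cpu-percent=70           # rejected: required flag(s) "max" not set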
replicationcontroller "frontend" deleted
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I1124 11:05:50.792846   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-f677cc669 to 3"
I1124 11:05:50.812127   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-f677cc669" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-f677cc669-5rzrj"
I1124 11:05:50.829303   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-f677cc669" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-f677cc669-tfjmm"
I1124 11:05:50.829335   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-f677cc669" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-f677cc669-jn88p"
core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
core.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I1124 11:05:51.138670   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-59677b8c47 to 1"
I1124 11:05:51.159119   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-59677b8c47" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-59677b8c47-x485k"
core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I1124 11:05:51.477614   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-f677cc669 to 2 from 3"
I1124 11:05:51.502239   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5cfd6dc9b9 to 1 from 0"
I1124 11:05:51.515949   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-5cfd6dc9b9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5cfd6dc9b9-c7kh6"
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1124 11:05:51.521953   44295 event.go:294] "Event occurred" object="namespace-1669287941-13492/nginx-deployment-resources-f677cc669" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-f677cc669-tfjmm"
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=7c54d4b896
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=7c54d4b896
  Containers:
   nginx:
    Image:        registry.k8s.io/nginx:test-cmd
... skipping 44 lines ...
I1124 11:05:53.693539   68724 loader.go:373] Config loaded from file:  /tmp/tmp.0As9P3mIqI/.kube/config
I1124 11:05:53.698638   68724 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I1124 11:05:53.705604   68724 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1669287952-3730/deployments?limit=500 200 OK in 1 milliseconds
I1124 11:05:53.707875   68724 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1669287952-3730/deployments/test-nginx-apps 200 OK in 1 milliseconds
I1124 11:05:53.711188   68724 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1669287952-3730/events?fieldSelector=involvedObject.name%3Dtest-nginx-apps%2CinvolvedObject.namespace%3Dnamespace-1669287952-3730%2CinvolvedObject.kind%3DDeployment%2CinvolvedObject.uid%3Dc378ac21-3581-4d93-bdf6-2f3628eb720b&limit=500 200 OK in 1 milliseconds
I1124 11:05:53.712981   68724 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1669287952-3730/replicasets?labelSelector=app%3Dtest-nginx-apps&limit=500 200 OK in 1 milliseconds
W1124 11:05:53.818557   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:05:53.818594   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-nginx-apps" deleted
apps.sh:251: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created (dry run)
deployment.apps/nginx-with-command created (server dry run)
apps.sh:255: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created
... skipping 67 lines ...
apps.sh:340: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
    Image:	registry.k8s.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:344: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
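The pause/rollback errors above correspond to the kubectl rollout subcommands being exercised: a paused deployment refuses undo and restart until it is resumed. A hedged sketch of the sequence (deployment name illustrative):
    kubectl rollout pause deployment/nginx          # undo/restart now fail until resumed
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx --to-revision=3
    kubectl rollout restart deployment/nginx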
deployment.apps/nginx restarted
I1124 11:06:01.819051   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-7c49bd5b4 to 2 from 3"
I1124 11:06:01.842371   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-7c49bd5b4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-7c49bd5b4-x7875"
I1124 11:06:01.848985   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-7d5d79b57b to 1 from 0"
I1124 11:06:01.878650   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-7d5d79b57b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-7d5d79b57b-8c4ts"
Successful
... skipping 61 lines ...
deployment.apps/nginx2 created
I1124 11:06:03.198164   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx2-f4898fb74 to 3"
I1124 11:06:03.216465   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx2-f4898fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-f4898fb74-5lvmc"
I1124 11:06:03.239902   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx2-f4898fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-f4898fb74-7747r"
I1124 11:06:03.239934   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx2-f4898fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-f4898fb74-xnqhh"
deployment.apps "nginx2" deleted
E1124 11:06:03.300775   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx2-f4898fb74" failed with replicasets.apps "nginx2-f4898fb74" not found
deployment.apps "nginx" deleted
apps.sh:389: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I1124 11:06:03.638049   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7f4655b8db to 3"
I1124 11:06:03.660179   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-7f4655b8db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7f4655b8db-hc9tt"
I1124 11:06:03.679574   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-7f4655b8db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7f4655b8db-7w7dg"
... skipping 7 lines ...
apps.sh:399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
I1124 11:06:04.355440   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5dc5bd75c8 to 1"
I1124 11:06:04.381104   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-5dc5bd75c8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5dc5bd75c8-9s5t9"
apps.sh:402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
apps.sh:403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
I1124 11:06:04.857419   44295 horizontal.go:452] Horizontal Pod Autoscaler frontend has been deleted in namespace-1669287941-13492
apps.sh:409: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
... skipping 60 lines ...
deployment.apps/nginx-deployment env updated
I1124 11:06:08.003249   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-bdb88cf5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-bdb88cf5c-bcd59"
I1124 11:06:08.036758   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-549f5ff8c8 to 0 from 1"
deployment.apps/nginx-deployment env updated
I1124 11:06:08.134591   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5dfd588ccc to 1 from 0"
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
deployment.apps "nginx-deployment" deleted
I1124 11:06:08.315473   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-549f5ff8c8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-549f5ff8c8-2mtcc"
configmap "test-set-env-config" deleted
E1124 11:06:08.340019   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-7647fc47c9" failed with replicasets.apps "nginx-deployment-7647fc47c9" not found
secret "test-set-env-secret" deleted
E1124 11:06:08.435406   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-7f4655b8db" failed with replicasets.apps "nginx-deployment-7f4655b8db" not found
apps.sh:474: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E1124 11:06:08.535843   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-5dfd588ccc" failed with replicasets.apps "nginx-deployment-5dfd588ccc" not found
deployment.apps/nginx-deployment created
I1124 11:06:08.701596   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7f4655b8db to 3"
E1124 11:06:08.743899   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-bdb88cf5c" failed with replicasets.apps "nginx-deployment-bdb88cf5c" not found
apps.sh:477: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E1124 11:06:08.785693   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-549f5ff8c8" failed with replicasets.apps "nginx-deployment-549f5ff8c8" not found
E1124 11:06:08.836265   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-7c9c467559" failed with replicasets.apps "nginx-deployment-7c9c467559" not found
apps.sh:478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
I1124 11:06:08.895930   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-7f4655b8db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7f4655b8db-tglwg"
apps.sh:479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
W1124 11:06:08.934324   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:06:08.934359   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1124 11:06:08.993223   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-7f4655b8db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7f4655b8db-hsqz2"
deployment.apps/nginx-deployment image updated
I1124 11:06:09.011115   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5dc5bd75c8 to 1"
I1124 11:06:09.051192   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-7f4655b8db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7f4655b8db-d46sj"
apps.sh:482: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
I1124 11:06:09.145666   44295 event.go:294] "Event occurred" object="namespace-1669287952-3730/nginx-deployment-5dc5bd75c8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5dc5bd75c8-sph5c"
... skipping 187 lines ...
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
has:registry.k8s.io/perl
deployment.apps "nginx-deployment" deleted
+++ exit code: 0
E1124 11:06:09.504234   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-7f4655b8db" failed with replicasets.apps "nginx-deployment-7f4655b8db" not found
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
E1124 11:06:09.547539   44295 replica_set.go:544] sync "namespace-1669287952-3730/nginx-deployment-5dc5bd75c8" failed with replicasets.apps "nginx-deployment-5dc5bd75c8" not found
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [1124 11:06:09] Creating namespace namespace-1669287969-19405
namespace/namespace-1669287969-19405 created
Context "test" modified.
+++ [1124 11:06:09] Testing kubectl(v1:replicasets)
apps.sh:645: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
+++ [1124 11:06:09] Deleting rs
I1124 11:06:09.962863   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-kq5sz"
I1124 11:06:09.980932   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-g6mb6"
I1124 11:06:09.980964   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-8zxd2"
E1124 11:06:10.029080   44295 replica_set.go:544] sync "namespace-1669287969-19405/frontend" failed with replicasets.apps "frontend" not found
replicaset.apps "frontend" deleted
E1124 11:06:10.085904   44295 replica_set.go:544] sync "namespace-1669287969-19405/frontend" failed with replicasets.apps "frontend" not found
apps.sh:651: Successful get pods -l tier=frontend {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:655: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I1124 11:06:10.370715   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-dt5mw"
I1124 11:06:10.388518   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-4ckkf"
I1124 11:06:10.388552   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-c2njf"
apps.sh:659: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [1124 11:06:10] Deleting rs
replicaset.apps "frontend" deleted
E1124 11:06:10.585935   44295 replica_set.go:544] sync "namespace-1669287969-19405/frontend" failed with replicasets.apps "frontend" not found
apps.sh:663: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:665: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-4ckkf" deleted
pod "frontend-c2njf" deleted
pod "frontend-dt5mw" deleted
apps.sh:668: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1669287969-19405
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 165 lines ...
I1124 11:06:14.316121   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/scale-3-744697cdb5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-744697cdb5-svghf"
I1124 11:06:14.330105   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/scale-3-744697cdb5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-744697cdb5-zqjhf"
apps.sh:730: Successful get deploy scale-1 {{.spec.replicas}}: 3
apps.sh:731: Successful get deploy scale-2 {{.spec.replicas}}: 3
apps.sh:732: Successful get deploy scale-3 {{.spec.replicas}}: 3
replicaset.apps "frontend" deleted
W1124 11:06:14.627851   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:06:14.627886   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
E1124 11:06:14.765984   44295 replica_set.go:544] sync "namespace-1669287969-19405/scale-1-744697cdb5" failed with replicasets.apps "scale-1-744697cdb5" not found
E1124 11:06:14.836200   44295 replica_set.go:544] sync "namespace-1669287969-19405/scale-3-744697cdb5" failed with replicasets.apps "scale-3-744697cdb5" not found
E1124 11:06:14.893546   44295 replica_set.go:544] sync "namespace-1669287969-19405/scale-2-744697cdb5" failed with replicasets.apps "scale-2-744697cdb5" not found
replicaset.apps/frontend created
I1124 11:06:14.946184   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6gcld"
I1124 11:06:15.009670   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-l69x6"
apps.sh:740: Successful get rs frontend {{.spec.replicas}}: 3
I1124 11:06:15.047089   44295 event.go:294] "Event occurred" object="namespace-1669287969-19405/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gh7pw"
I1124 11:06:15.107421   42319 alloc.go:327] "allocated clusterIPs" service="namespace-1669287969-19405/frontend" clusterIPs=map[IPv4:10.0.0.142]
... skipping 46 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:808: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 20 lines ...
apps.sh:610: Successful get statefulset nginx {{.spec.replicas}}: 0
apps.sh:611: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I1124 11:06:19.547148   44295 event.go:294] "Event occurred" object="namespace-1669287978-30989/nginx" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod nginx-0 in StatefulSet nginx successful"
apps.sh:615: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:616: Successful get statefulset nginx {{.status.observedGeneration}}: 2
W1124 11:06:19.762526   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:06:19.762564   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx restarted
apps.sh:624: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I1124 11:06:19.973356   44295 stateful_set.go:449] StatefulSet has been deleted namespace-1669287978-30989/nginx
+++ exit code: 0
Recording: run_statefulset_history_tests
... skipping 233 lines ...
message:statefulset.apps/nginx 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:570: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
apps.sh:571: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:574: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8:
apps.sh:575: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
... skipping 39 lines ...
service/list-service-test created
deployment.apps/list-deployment-test created
I1124 11:06:23.521255   44295 event.go:294] "Event occurred" object="namespace-1669287983-19569/list-deployment-test" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set list-deployment-test-59c8dc9888 to 1"
I1124 11:06:23.562065   44295 event.go:294] "Event occurred" object="namespace-1669287983-19569/list-deployment-test-59c8dc9888" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: list-deployment-test-59c8dc9888-bg897"
service "list-service-test" deleted
deployment.apps "list-deployment-test" deleted
E1124 11:06:23.632870   44295 replica_set.go:544] sync "namespace-1669287983-19569/list-deployment-test-59c8dc9888" failed with Operation cannot be fulfilled on replicasets.apps "list-deployment-test-59c8dc9888": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1669287983-19569/list-deployment-test-59c8dc9888, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 2e23327a-4a49-49e5-b41c-f19b96bb47cb, UID in object meta: 
+++ exit code: 0
Recording: run_multi_resources_tests
Running command: run_multi_resources_tests

+++ Running case: test-cmd.run_multi_resources_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 36 lines ...
Name:         mock
Namespace:    namespace-1669287983-24345
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1669287983-24345
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1669287983-24345
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1669287983-24345
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1669287983-24345
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 115 lines ...
+++ [1124 11:06:35] Creating namespace namespace-1669287995-8992
namespace/namespace-1669287995-8992 created
Context "test" modified.
+++ [1124 11:06:35] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E1124 11:06:35.720573   44295 pv_protection_controller.go:110] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
E1124 11:06:35.732644   44295 pv_protection_controller.go:110] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E1124 11:06:36.152327   44295 pv_protection_controller.go:110] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
E1124 11:06:36.172771   44295 pv_protection_controller.go:110] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
... skipping 4 lines ...
I1124 11:06:36.720562   74687 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 1 milliseconds
I1124 11:06:36.722784   74687 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I1124 11:06:36.733041   74687 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3D40ac810e-3048-4e3e-aa3e-b3b3f96b9679%2CinvolvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D&limit=500 200 OK in 9 milliseconds
persistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E1124 11:06:37.254563   44295 pv_protection_controller.go:110] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:Warning: deleting cluster-scoped resources
Successful
... skipping 88 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 35 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 42 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Thu, 24 Nov 2022 11:01:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Thu, 24 Nov 2022 11:01:12 +0000   Thu, 24 Nov 2022 11:02:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 172 lines ...
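
Editorial note: the repeated "Lease: Failed to get lease" and NodeStatusNeverUpdated entries above are expected for a test node that never ran a kubelet; kubectl describe looks up a coordination.k8s.io Lease named after the node in the kube-node-lease namespace. A minimal sketch of that lookup, assuming the same hypothetical clientset wiring as the sketch earlier in this log:

// Sketch: fetch the node heartbeat Lease that `kubectl describe node` reports.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "127.0.0.1", metav1.GetOptions{})
	if err != nil {
		// For the node in this log the API returns NotFound, which kubectl
		// renders as "Failed to get lease".
		fmt.Println("lease lookup failed:", err)
		return
	}
	fmt.Println("renewTime:", lease.Spec.RenewTime)
}
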
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
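
Editorial note: the "--subresource can not be used with NonResourceURL" error reflects how `kubectl auth can-i` is built on a SelfSubjectAccessReview, whose spec takes either ResourceAttributes (where a subresource lives) or NonResourceAttributes, never both. A minimal sketch of the resource-attributes form, with the clientset wiring again assumed for illustration:

// Sketch: the API call behind `kubectl auth can-i get pods --subresource=log`.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The subresource is part of ResourceAttributes, so it has no meaning for
	// a NonResourceAttributes (non-resource URL) check.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:        "get",
				Resource:    "pods",
				Subresource: "log",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
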
Successful
W1124 11:06:46.442699   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124 11:06:46.442739   44295 reflector.go:140] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:yes
0
has:0
Successful
message:0
... skipping 60 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:870: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:871: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:872: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:873: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 24 lines ...
discovery.sh:91: Successful get all -l app=cassandra {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
I1124 11:06:48.818986   44295 event.go:294] "Event occurred" object="namespace-1669288008-25186/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-vstdh"
pod "cassandra-5rtsg" deleted
I1124 11:06:48.883792   44295 event.go:294] "Event occurred" object="namespace-1669288008-25186/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-96dm2"
pod "cassandra-fkgpj" deleted
replicationcontroller "cassandra" deleted
E1124 11:06:48.914084   44295 replica_set.go:544] sync "namespace-1669288008-25186/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 257 lines ...
has:includeObject=Object
get.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:333: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W1124 11:06:57.909583   44295 reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1124