Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 5h15m
Revision | main
... skipping 59 lines ...
Fri, 04 Nov 2022 00:50:10 +0000: running gmsa setup
Fri, 04 Nov 2022 00:50:10 +0000: setting up domain vm in gmsa-dc-27940 with keyvault capz-ci-gmsa
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh github.com/drone/envsubst/v2/cmd/envsubst envsubst v2.0.0-20210730161058-179042472c46
go: downloading github.com/drone/envsubst/v2 v2.0.0-20210730161058-179042472c46
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
WARNING: Failed to query a3dadaa5-8e1b-459e-abb2-f4b9241bf73a by invoking Graph API. If you don't have permission to query Graph API, please specify --assignee-object-id and --assignee-principal-type.
WARNING: Assuming a3dadaa5-8e1b-459e-abb2-f4b9241bf73a as an object ID.
Pre-reqs are met for creating Domain vm
{
  "id": "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/gmsa-dc-27940",
  "location": "uksouth",
  "managedBy": null,
... skipping 3 lines ...
  },
  "tags": {
    "creationTimestamp": "2022-11-04T00:50:23Z"
  },
  "type": "Microsoft.Resources/resourceGroups"
}
ERROR: (ResourceNotFound) The Resource 'Microsoft.Compute/virtualMachines/dc-27940' under resource group 'gmsa-dc-27940' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound
Message: The Resource 'Microsoft.Compute/virtualMachines/dc-27940' under resource group 'gmsa-dc-27940' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Creating Domain vm
WARNING: It is recommended to use parameter "--public-ip-sku Standard" to create new VM with Standard public IP. Please note that the default public IP used for VM creation will be changed from Basic to Standard in the future.
{
  "fqdns": "",
... skipping 13 lines ...
  "privateIpAddress": "172.16.0.4",
  "publicIpAddress": "",
  "resourceGroup": "gmsa-dc-27940",
  "zones": ""
}
WARNING: Command group 'network bastion' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
ERROR: (ResourceNotFound) The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-27940' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound
Message: The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-27940' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Fri, 04 Nov 2022 00:52:19 +0000: starting to create cluster
WARNING: The installed extension 'capi' is in preview.
Using ./capz/templates/gmsa.yaml
WARNING: Command group 'capi' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
... skipping 5 lines ...
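The ResourceNotFound errors above are expected: the setup script probes for an existing domain VM (and bastion) before creating it. A minimal Go sketch of that probe-then-create pattern, using `os/exec` around the Azure CLI; the resource names come from the log, but the helper itself and the `--image` value are illustrative assumptions, not the job's actual script:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureVM mirrors the flow in the log: "az vm show" exits non-zero with
// ResourceNotFound when the VM is absent, and only then is it created.
func ensureVM(resourceGroup, name string) error {
	if err := exec.Command("az", "vm", "show",
		"-g", resourceGroup, "-n", name).Run(); err == nil {
		fmt.Printf("vm %s already exists\n", name)
		return nil
	}
	// Flags besides -g/-n are illustrative, not the script's exact ones.
	out, err := exec.Command("az", "vm", "create",
		"-g", resourceGroup, "-n", name,
		"--image", "Win2022Datacenter").CombinedOutput()
	if err != nil {
		return fmt.Errorf("az vm create: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := ensureVM("gmsa-dc-27940", "dc-27940"); err != nil {
		fmt.Println(err)
	}
}
```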
WARNING: Merged "capi-manager" as current context in /root/.kube/config WARNING: ✓ Obtained AKS credentials WARNING: ✓ Created Cluster Identity Secret WARNING: ✓ Initialized management cluster WARNING: ✓ Generated workload cluster configuration at "capz-conf-f5ura0.yaml" WARNING: ✓ Created workload cluster "capz-conf-f5ura0" Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: 
"capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found Error: "capz-conf-f5ura0-kubeconfig" not found in namespace "default": secrets "capz-conf-f5ura0-kubeconfig" not found WARNING: ✓ Workload cluster is accessible WARNING: ✓ Workload access configuration written to "capz-conf-f5ura0.kubeconfig" WARNING: ✓ Deployed CNI to workload cluster WARNING: ✓ Deployed Windows Calico support to workload cluster WARNING: ✓ Deployed Windows kube-proxy support to workload cluster WARNING: ✓ Workload cluster is ready ... skipping 1650 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-ecfc1e66-da4c-44dd-b036-f3eb2fdbf934 11/04/22 01:34:07.477 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 01:34:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 01:34:07.692290 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 230 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-ecfc1e66-da4c-44dd-b036-f3eb2fdbf934 11/04/22 01:34:07.477 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 01:34:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 01:34:07.692290 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 452 lines ... Nov 4 01:49:32.098: INFO: waiting for 3 replicas (current: 1) Nov 4 01:49:45.099: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 01:49:45.099: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3867/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 01:49:52.100: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.099: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.201: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.201: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 01:50:12.201: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00215be68, {0x749d31b?, 0xc002be2360?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 112 lines ... Nov 4 01:50:29.652: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3867" for this suite. 11/04/22 01:50:29.653 ------------------------------ • [FAILED] [944.580 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 ... skipping 147 lines ... Nov 4 01:49:32.098: INFO: waiting for 3 replicas (current: 1) Nov 4 01:49:45.099: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 01:49:45.099: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3867/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 01:49:52.100: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.099: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.201: INFO: waiting for 3 replicas (current: 1) Nov 4 01:50:12.201: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 01:50:12.201: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00215be68, {0x749d31b?, 0xc002be2360?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 841 lines ... STEP: Destroying namespace "gc-5315" for this suite. 11/04/22 02:20:10.102 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/04/22 02:20:10.208 Nov 4 02:20:10.209: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig I1104 02:20:10.210172 14081 discovery.go:214] Invalidating discovery information ... skipping 10 lines ... 
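For context on the earlier `"capz-conf-f5ura0-kubeconfig" not found` retry loop during cluster creation: Cluster API publishes a workload cluster's kubeconfig as a `<cluster-name>-kubeconfig` Secret on the management cluster, so tooling polls until that Secret exists. A minimal client-go sketch of such a wait; the secret and kubeconfig names come from the log, while the interval and timeout are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Management-cluster kubeconfig, as merged into /root/.kube/config above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until Cluster API creates the "<cluster>-kubeconfig" Secret.
	err = wait.PollImmediate(5*time.Second, 15*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Secrets("default").Get(
			context.TODO(), "capz-conf-f5ura0-kubeconfig", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("kubeconfig secret not found yet; retrying")
			return false, nil
		}
		return err == nil, err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("workload kubeconfig secret is available")
}
```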
I1104 02:20:10.725060 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 02:20:10.927724 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 STEP: Creating a simple DaemonSet "daemon-set" 11/04/22 02:20:11.352 STEP: Check that daemon pods launch on every node of the cluster. 11/04/22 02:20:11.457 Nov 4 02:20:11.569: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:11.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 4 02:20:11.673: INFO: Node capz-conf-jm2t7 is running 0 daemon pod, expected 1 ... skipping 21 lines ... Nov 4 02:20:19.783: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:19.888: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:19.888: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:20.783: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:20.888: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 4 02:20:20.888: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/04/22 02:20:20.991 Nov 4 02:20:21.313: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:21.417: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:21.418: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:22.528: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:22.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:22.632: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 ... skipping 12 lines ... 
Nov 4 02:20:27.530: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:27.635: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:27.635: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:28.528: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:28.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 4 02:20:28.632: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 11/04/22 02:20:28.632 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 STEP: Deleting DaemonSet "daemon-set" 11/04/22 02:20:28.837 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2697, will wait for the garbage collector to delete the pods 11/04/22 02:20:28.837 I1104 02:20:28.940235 14081 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57 I1104 02:20:28.940273 14081 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57 ... skipping 19 lines ... tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-2697" for this suite. 11/04/22 02:20:34.933 ------------------------------ • [SLOW TEST] [24.832 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/04/22 02:20:10.208 ... skipping 12 lines ... I1104 02:20:10.725060 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 02:20:10.927724 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 STEP: Creating a simple DaemonSet "daemon-set" 11/04/22 02:20:11.352 STEP: Check that daemon pods launch on every node of the cluster. 11/04/22 02:20:11.457 Nov 4 02:20:11.569: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:11.673: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 4 02:20:11.673: INFO: Node capz-conf-jm2t7 is running 0 daemon pod, expected 1 ... skipping 21 lines ... 
Nov 4 02:20:19.783: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:19.888: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:19.888: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:20.783: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:20.888: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 4 02:20:20.888: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/04/22 02:20:20.991 Nov 4 02:20:21.313: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:21.417: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:21.418: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:22.528: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:22.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:22.632: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 ... skipping 12 lines ... Nov 4 02:20:27.530: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:27.635: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 4 02:20:27.635: INFO: Node capz-conf-ptz2f is running 0 daemon pod, expected 1 Nov 4 02:20:28.528: INFO: DaemonSet pods can't tolerate node capz-conf-f5ura0-control-plane-bt7tm with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 4 02:20:28.632: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 4 02:20:28.632: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 11/04/22 02:20:28.632 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 STEP: Deleting DaemonSet "daemon-set" 11/04/22 02:20:28.837 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2697, will wait for the garbage collector to delete the pods 11/04/22 02:20:28.837 I1104 02:20:28.940235 14081 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57 I1104 02:20:28.940273 14081 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57 ... skipping 383 lines ... 
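The `Set a daemon pod's phase to 'Failed'` step in the DaemonSet test above relies on the DaemonSet controller deleting and recreating pods whose status is Failed. A sketch of forcing that condition with client-go; this is the general shape of such a step under assumed names, not the e2e suite's exact helper:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markPodFailed sets a pod's status.phase to Failed via the status
// subresource, which is what makes the DaemonSet controller "revive"
// it by creating a replacement pod.
func markPodFailed(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = v1.PodFailed
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}
```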
Nov 4 02:38:04.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:17.218: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 02:38:17.218: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5734/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 02:38:24.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.415: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.415: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 02:38:44.415: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc001671e68, {0x749d31b?, 0xc0035c0480?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 112 lines ... Nov 4 02:39:01.852: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-5734" for this suite. 11/04/22 02:39:01.853 ------------------------------ • [FAILED] [944.530 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:157 ... skipping 146 lines ... Nov 4 02:38:04.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:17.218: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 02:38:17.218: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5734/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 02:38:24.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.313: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.415: INFO: waiting for 3 replicas (current: 1) Nov 4 02:38:44.415: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 02:38:44.415: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc001671e68, {0x749d31b?, 0xc0035c0480?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 208 lines ... 
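The `ConsumeMem URL` lines above show the framework driving the resource-consumer controller service through the API server's service proxy. A sketch of issuing that same request with client-go; the namespace, service name, and parameters are copied from the logged URL, and the function itself is an illustration rather than the framework's code:

```go
package sketch

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// consumeMem asks the resource-consumer controller to hold 250 MB for
// 30 s via POST .../services/test-deployment-ctrl/proxy/ConsumeMem,
// matching the "ConsumeMem URL" log lines.
func consumeMem(ctx context.Context, cs kubernetes.Interface, ns string) error {
	return cs.CoreV1().RESTClient().Post().
		Namespace(ns).
		Resource("services").
		Name("test-deployment-ctrl").
		SubResource("proxy").
		Suffix("ConsumeMem").
		Param("megabytes", "250").
		Param("durationSec", "30").
		Param("requestSizeMegabytes", "100").
		Do(ctx).Error()
}
```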
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1a9bfd9d-e3a8-445a-afe3-54248f408a60 11/04/22 02:39:32.618 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 02:39:32.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 02:39:32.833284 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 82 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-1a9bfd9d-e3a8-445a-afe3-54248f408a60 11/04/22 02:39:32.618 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 02:39:32.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 02:39:32.833284 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 645 lines ... STEP: verifying the node doesn't have the label node 11/04/22 02:43:50.587 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 02:43:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 02:43:50.801604 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 104 lines ... STEP: verifying the node doesn't have the label node 11/04/22 02:43:50.587 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 02:43:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 02:43:50.801604 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 778 lines ... 
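The SchedulerPredicates blocks above repeatedly apply a `kubernetes.io/e2e-...` node label and then verify its removal during cleanup. A sketch of doing both with a strategic-merge patch, where a null label value removes the key; the helper and label handling are assumptions, not the framework's implementation:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNode sets a node label, or clears it when value is nil; a null
// value in a strategic-merge patch deletes the key, which is why the
// "verifying the node doesn't have the label" check passes afterwards.
func labelNode(ctx context.Context, cs kubernetes.Interface, node, key string, value *string) error {
	val := "null"
	if value != nil {
		val = fmt.Sprintf("%q", *value)
	}
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%s}}}`, key, val))
	_, err := cs.CoreV1().Nodes().Patch(ctx, node,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```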
Nov 4 03:05:49.050: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:02.965: INFO: RC rc: sending request to consume 250 millicores Nov 4 03:06:02.965: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3521/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 03:06:09.045: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.046: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.148: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.148: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:06:29.148: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002bcbe68, {0x7470e12?, 0xc002e968a0?}, {{0x0, 0x0}, {0x7470e5c, 0x2}, {0x74c0413, 0x15}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x7470e12?, 0x61a0885?}, {{0x0, 0x0}, {0x7470e5c, 0x2}, {0x74c0413, 0x15}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 119 lines ... Nov 4 03:06:46.474: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3521" for this suite. 11/04/22 03:06:46.474 ------------------------------ • [FAILED] [944.415 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:79 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:80 ... skipping 146 lines ... Nov 4 03:05:49.050: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:02.965: INFO: RC rc: sending request to consume 250 millicores Nov 4 03:06:02.965: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3521/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 03:06:09.045: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.046: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.148: INFO: waiting for 3 replicas (current: 2) Nov 4 03:06:29.148: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:06:29.148: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002bcbe68, {0x7470e12?, 0xc002e968a0?}, {{0x0, 0x0}, {0x7470e5c, 0x2}, {0x74c0413, 0x15}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x7470e12?, 0x61a0885?}, {{0x0, 0x0}, {0x7470e5c, 0x2}, {0x74c0413, 0x15}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 288 lines ... 
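The failing case above drives a ReplicationController from 1 to 3 replicas through an HPA on CPU utilization. A sketch of the kind of autoscaling/v2 object involved; the replica bounds match the test name, but the utilization target is an illustrative value, not the suite's exact setting:

```go
package sketch

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rcCPUHPA scales the ReplicationController "rc" between 1 and 5
// replicas on average CPU utilization.
func rcCPUHPA(ns string) *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	target := int32(20) // illustrative utilization target
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rc", Namespace: ns},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "v1",
				Kind:       "ReplicationController",
				Name:       "rc",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: v1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &target,
					},
				},
			}},
		},
	}
}
```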
Nov 4 03:21:33.511: INFO: waiting for 3 replicas (current: 1) Nov 4 03:21:46.538: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 03:21:46.538: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6546/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 03:21:53.510: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.508: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.610: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.610: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:22:13.610: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc002bcde58, {0x749d31b?, 0xc003bcad20?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c k8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219 ... skipping 112 lines ... Nov 4 03:22:31.134: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-6546" for this suite. 11/04/22 03:22:31.135 ------------------------------ • [FAILED] [944.652 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Container Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:162 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:166 ... skipping 148 lines ... Nov 4 03:21:33.511: INFO: waiting for 3 replicas (current: 1) Nov 4 03:21:46.538: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 03:21:46.538: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6546/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 03:21:53.510: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.508: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.610: INFO: waiting for 3 replicas (current: 1) Nov 4 03:22:13.610: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:22:13.610: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc002bcde58, {0x749d31b?, 0xc003bcad20?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c k8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219 ... 
skipping 422 lines ... Nov 4 03:25:33.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 4 03:25:33.770: INFO: stderr: "" Nov 4 03:25:33.770: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 4 03:25:33.770: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 4 03:25:33.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-2503-webhook' Nov 4 03:25:34.297: INFO: stderr: "" Nov 4 03:25:34.297: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-2503-webhook\" deleted\n" Nov 4 03:25:34.297: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-2503-webhook" deleted error:%!s(<nil>) Nov 4 03:25:34.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig --namespace=gmsa-full-test-windows-2503 exec --namespace=gmsa-full-test-windows-2503 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 4 03:25:40.679: INFO: stderr: "" Nov 4 03:25:40.679: INFO: stdout: "namespace \"gmsa-full-test-windows-2503-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-2503-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 4 03:25:40.679: INFO: stdout:namespace "gmsa-full-test-windows-2503-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-2503-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 03:25:40.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 152 lines ... 
Nov 4 03:25:33.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 4 03:25:33.770: INFO: stderr: "" Nov 4 03:25:33.770: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 4 03:25:33.770: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 4 03:25:33.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-2503-webhook' Nov 4 03:25:34.297: INFO: stderr: "" Nov 4 03:25:34.297: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-2503-webhook\" deleted\n" Nov 4 03:25:34.297: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-2503-webhook" deleted error:%!s(<nil>) Nov 4 03:25:34.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig --namespace=gmsa-full-test-windows-2503 exec --namespace=gmsa-full-test-windows-2503 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 4 03:25:40.679: INFO: stderr: "" Nov 4 03:25:40.679: INFO: stdout: "namespace \"gmsa-full-test-windows-2503-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-2503-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 4 03:25:40.679: INFO: stdout:namespace "gmsa-full-test-windows-2503-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-2503-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2503-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 03:25:40.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 316 lines ... 
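The earlier `Deployment (Container Resource)` failure exercises autoscaling/v2's per-container metric source, which scopes the memory target to one named container instead of the whole pod. A sketch of that metric spec; the container name comes from the log's deployment, while the target quantity is illustrative:

```go
package sketch

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// containerMemoryMetric targets the average memory usage of a single
// container, as in the "Deployment (Container Resource)" HPA case.
func containerMemoryMetric() autoscalingv2.MetricSpec {
	return autoscalingv2.MetricSpec{
		Type: autoscalingv2.ContainerResourceMetricSourceType,
		ContainerResource: &autoscalingv2.ContainerResourceMetricSource{
			Name:      v1.ResourceMemory,
			Container: "test-deployment",
			Target: autoscalingv2.MetricTarget{
				Type:         autoscalingv2.AverageValueMetricType,
				AverageValue: resource.NewQuantity(250*1024*1024, resource.BinarySI), // illustrative
			},
		},
	}
}
```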
Nov 4 03:41:56.767: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:10.798: INFO: RC test-deployment: sending request to consume 250 millicores Nov 4 03:42:10.798: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-954/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 03:42:16.768: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.765: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.867: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.867: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:42:36.867: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc005353e68, {0x749d31b?, 0xc0033538c0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 43 lines ... Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-56dc5cfbdd to 2 from 1 Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment-56dc5cfbdd: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-56dc5cfbdd-8skcr Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-954/test-deployment-56dc5cfbdd-8skcr to capz-conf-ptz2f Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:55 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Created: Created container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:55 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.10" already present on machine Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:58 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Started: Started container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint horizontal-pod-autoscaling-954/test-deployment: Operation cannot be fulfilled on endpoints "test-deployment": the object has been modified; please apply your changes to the latest version and try again Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Killing: Stopping container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment-56dc5cfbdd-mqtww: {kubelet capz-conf-jm2t7} Killing: Stopping container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:49 +0000 UTC - event for test-deployment-ctrl-qsxlz: {kubelet capz-conf-ptz2f} Killing: Stopping container test-deployment-ctrl Nov 4 03:42:52.166: INFO: POD NODE PHASE GRACE CONDITIONS Nov 4 03:42:52.166: INFO: Nov 4 
03:42:52.274: INFO: ... skipping 66 lines ... Nov 4 03:42:54.485: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-954" for this suite. 11/04/22 03:42:54.485 ------------------------------ • [FAILED] [944.748 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55 ... skipping 148 lines ... Nov 4 03:41:56.767: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:10.798: INFO: RC test-deployment: sending request to consume 250 millicores Nov 4 03:42:10.798: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-954/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 03:42:16.768: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.765: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.867: INFO: waiting for 3 replicas (current: 2) Nov 4 03:42:36.867: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 03:42:36.867: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc005353e68, {0x749d31b?, 0xc0033538c0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 43 lines ... 
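The `FailedToUpdateEndpoint ... the object has been modified; please apply your changes to the latest version and try again` event in the dump above is a routine optimistic-concurrency conflict: a writer raced another update and must re-read before retrying. The standard client-go pattern for that is `retry.RetryOnConflict`, sketched here with a hypothetical mutation:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchEndpoints re-reads the Endpoints object on each attempt and
// retries on 409 Conflict, which is exactly the remedy the "object has
// been modified" message asks for.
func touchEndpoints(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ep.Labels == nil {
			ep.Labels = map[string]string{}
		}
		ep.Labels["touched"] = "true" // hypothetical mutation
		_, err = cs.CoreV1().Endpoints(ns).Update(ctx, ep, metav1.UpdateOptions{})
		return err
	})
}
```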
Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-56dc5cfbdd to 2 from 1 Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment-56dc5cfbdd: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-56dc5cfbdd-8skcr Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:51 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-954/test-deployment-56dc5cfbdd-8skcr to capz-conf-ptz2f Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:55 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Created: Created container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:55 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.10" already present on machine Nov 4 03:42:52.064: INFO: At 2022-11-04 03:27:58 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Started: Started container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint horizontal-pod-autoscaling-954/test-deployment: Operation cannot be fulfilled on endpoints "test-deployment": the object has been modified; please apply your changes to the latest version and try again Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment-56dc5cfbdd-8skcr: {kubelet capz-conf-ptz2f} Killing: Stopping container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:47 +0000 UTC - event for test-deployment-56dc5cfbdd-mqtww: {kubelet capz-conf-jm2t7} Killing: Stopping container test-deployment Nov 4 03:42:52.064: INFO: At 2022-11-04 03:42:49 +0000 UTC - event for test-deployment-ctrl-qsxlz: {kubelet capz-conf-ptz2f} Killing: Stopping container test-deployment-ctrl Nov 4 03:42:52.166: INFO: POD NODE PHASE GRACE CONDITIONS Nov 4 03:42:52.166: INFO: Nov 4 03:42:52.274: INFO: ... skipping 81 lines ... k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.3() test/e2e/autoscaling/horizontal_pod_autoscaling.go:56 +0x88 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/04/22 03:42:54.604 Nov 4 03:42:54.604: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig I1104 03:42:54.605271 14081 discovery.go:214] Invalidating discovery information ... skipping 8 lines ... 
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/04/22 03:42:55.117 I1104 03:42:55.117556 14081 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 03:42:55.117606 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 03:42:55.320291 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Nov 4 03:42:55.430: INFO: Waiting up to 2m0s for pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" in namespace "var-expansion-1284" to be "container 0 failed with reason CreateContainerConfigError" Nov 4 03:42:55.532: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 102.717912ms Nov 4 03:42:57.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206040817s Nov 4 03:42:59.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206592154s Nov 4 03:42:59.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 4 03:42:59.636: INFO: Deleting pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" in namespace "var-expansion-1284" Nov 4 03:42:59.745: INFO: Wait up to 5m0s for pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 4 03:43:03.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 4 lines ... tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-1284" for this suite. 11/04/22 03:43:04.061 ------------------------------ • [SLOW TEST] [9.566 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/04/22 03:42:54.604 ... skipping 10 lines ... 
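The variable-expansion test above expects kubelet to reject a volume `subPathExpr` that expands to an absolute path, leaving the pod in the `CreateContainerConfigError` state the log waits for. A sketch of the shape of such a container spec; the names, image, and the `/tmp` value are illustrative, not the test's exact manifest:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// badSubPathContainer expands $(ABS_PATH) to "/tmp"; subPath values
// must be relative, so kubelet fails the container with
// CreateContainerConfigError rather than starting it.
func badSubPathContainer() v1.Container {
	return v1.Container{
		Name:  "var-expansion",
		Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative image
		Env: []v1.EnvVar{{
			Name:  "ABS_PATH",
			Value: "/tmp",
		}},
		VolumeMounts: []v1.VolumeMount{{
			Name:        "workdir",
			MountPath:   "/volume_mount",
			SubPathExpr: "$(ABS_PATH)",
		}},
	}
}
```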
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/04/22 03:42:55.117 I1104 03:42:55.117556 14081 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 03:42:55.117606 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 03:42:55.320291 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Nov 4 03:42:55.430: INFO: Waiting up to 2m0s for pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" in namespace "var-expansion-1284" to be "container 0 failed with reason CreateContainerConfigError" Nov 4 03:42:55.532: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 102.717912ms Nov 4 03:42:57.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206040817s Nov 4 03:42:59.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206592154s Nov 4 03:42:59.636: INFO: Pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 4 03:42:59.636: INFO: Deleting pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" in namespace "var-expansion-1284" Nov 4 03:42:59.745: INFO: Wait up to 5m0s for pod "var-expansion-32235d84-7155-4dd4-b9d7-b470ebfc5985" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 4 03:43:03.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 2137 lines ... [It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Nov 4 04:04:13.441: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig Nov 4 04:04:16.099: INFO: created owner resource "ownerkgwgc" Nov 4 04:04:16.221: INFO: created dependent resource "dependentwn25d" Nov 4 04:04:16.443: INFO: created canary resource "canary5qjx9" I1104 04:04:27.789041 14081 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 4 04:04:27.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 31 lines ... 
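The garbage-collector case above creates owner, dependent, and canary custom resources, deletes the owner, and expects the dependent to be cascaded away. A sketch of requesting that explicitly with a foreground propagation policy through the dynamic client; the GVR coordinates and names are placeholders, since the test's CRD is generated at runtime:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// deleteOwnerForeground deletes an owner object with foreground
// cascading, so the GC removes dependents before the owner disappears.
func deleteOwnerForeground(ctx context.Context, dc dynamic.Interface, ns, name string) error {
	gvr := schema.GroupVersionResource{ // placeholder CRD coordinates
		Group: "example.com", Version: "v1", Resource: "owners",
	}
	fg := metav1.DeletePropagationForeground
	return dc.Resource(gvr).Namespace(ns).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &fg})
}
```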
[It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Nov 4 04:04:13.441: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig Nov 4 04:04:16.099: INFO: created owner resource "ownerkgwgc" Nov 4 04:04:16.221: INFO: created dependent resource "dependentwn25d" Nov 4 04:04:16.443: INFO: created canary resource "canary5qjx9" I1104 04:04:27.789041 14081 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 4 04:04:27.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 2145 lines ... Nov 4 04:24:30.980: INFO: waiting for 3 replicas (current: 2) Nov 4 04:24:45.045: INFO: RC test-deployment: sending request to consume 250 millicores Nov 4 04:24:45.045: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3730/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 04:24:50.981: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:10.981: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:11.083: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:11.083: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 04:25:11.083: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0052d3e68, {0x749d31b?, 0xc00373bd40?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 4 04:25:29.221: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3730" for this suite. 11/04/22 04:25:29.221 ------------------------------ • [FAILED] [945.235 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 ... skipping 150 lines ... 
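The FAIL above is the HPA test giving up after its 15m replica wait: the deployment never got past 2 of 3 ready replicas. The loop that produces the repeated "waiting for 3 replicas (current: 2)" lines has roughly the shape below; this is a sketch under the assumption that ReadyReplicas is the compared field, with intervals chosen to match the log cadence. Note that wait.PollImmediate's timeout error is exactly the "timed out waiting for the condition" string wrapped into the failure.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls a deployment until the desired replica count is
// ready, failing after 15m with the timeout error quoted in the log.
func waitForReplicas(c kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
		return d.Status.ReadyReplicas == want, nil
	})
}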
Nov 4 04:24:30.980: INFO: waiting for 3 replicas (current: 2) Nov 4 04:24:45.045: INFO: RC test-deployment: sending request to consume 250 millicores Nov 4 04:24:45.045: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3730/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 04:24:50.981: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:10.981: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:11.083: INFO: waiting for 3 replicas (current: 2) Nov 4 04:25:11.083: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 04:25:11.083: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0052d3e68, {0x749d31b?, 0xc00373bd40?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 1067 lines ... I1104 04:40:59.969042 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 04:41:00.170582 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/04/22 04:41:00.17 Nov 4 04:41:00.279: INFO: Waiting up to 2m0s for pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c" in namespace "var-expansion-6200" to be "running" Nov 4 04:41:00.381: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.999378ms Nov 4 04:41:02.484: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205493625s Nov 4 04:41:04.483: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204306086s Nov 4 04:41:06.483: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204573002s Nov 4 04:41:08.485: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205775432s ... skipping 107 lines ... 
I1104 04:40:59.969042 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 04:41:00.170582 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/04/22 04:41:00.17 Nov 4 04:41:00.279: INFO: Waiting up to 2m0s for pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c" in namespace "var-expansion-6200" to be "running" Nov 4 04:41:00.381: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.999378ms Nov 4 04:41:02.484: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205493625s Nov 4 04:41:04.483: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204306086s Nov 4 04:41:06.483: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204573002s Nov 4 04:41:08.485: INFO: Pod "var-expansion-2a42b5f1-4ed8-442c-ad48-5ee54e375b4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205775432s ... skipping 83 lines ... STEP: Destroying namespace "var-expansion-6200" for this suite. 11/04/22 04:43:19.93 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/04/22 04:43:20.043 ... skipping 12 lines ... I1104 04:43:20.556987 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 04:43:20.758452 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 4 04:43:21.298: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. 
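The memory-limits spec above passes by design: it over-requests memory on the Windows nodes and then asserts that a FailedScheduling event appears carrying the scheduler's "Insufficient memory" breakdown quoted in the last line. One way a test can fetch such events is an event list with a reason field selector, sketched below; the helper name and wiring are illustrative, not the framework's actual code.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// failedSchedulingEvents lists events in ns whose reason is FailedScheduling,
// the signal a test looks for after creating a deliberately unschedulable pod.
func failedSchedulingEvents(c kubernetes.Interface, ns string) ([]corev1.Event, error) {
	list, err := c.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "reason=FailedScheduling",
	})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}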
[AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 04:43:21.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 6 lines ... ------------------------------ • [1.467 seconds] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:27 attempt to deploy past allocatable memory limits test/e2e/windows/memory_limits.go:59 should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 14 lines ... I1104 04:43:20.556987 14081 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1104 04:43:20.758452 14081 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 4 04:43:21.298: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 04:43:21.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 953 lines ... I1104 04:51:57.378363 14081 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 04:51:57.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 04:51:57.586575 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 61 lines ... 
I1104 04:51:57.378363 14081 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 4 04:51:57.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1104 04:51:57.586575 14081 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 1625 lines ... Nov 4 05:21:16.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 4 05:21:17.280: INFO: stderr: "" Nov 4 05:21:17.280: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 4 05:21:17.280: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 4 05:21:17.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-7599-webhook' Nov 4 05:21:17.808: INFO: stderr: "" Nov 4 05:21:17.808: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-7599-webhook\" deleted\n" Nov 4 05:21:17.808: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-7599-webhook" deleted error:%!s(<nil>) Nov 4 05:21:17.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig --namespace=gmsa-full-test-windows-7599 exec --namespace=gmsa-full-test-windows-7599 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 4 05:21:24.184: INFO: stderr: "" Nov 4 05:21:24.184: INFO: stdout: "namespace \"gmsa-full-test-windows-7599-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-7599-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 4 05:21:24.184: INFO: stdout:namespace "gmsa-full-test-windows-7599-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-7599-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted 
mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 05:21:24.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 151 lines ... Nov 4 05:21:16.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 4 05:21:17.280: INFO: stderr: "" Nov 4 05:21:17.280: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 4 05:21:17.280: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 4 05:21:17.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-7599-webhook' Nov 4 05:21:17.808: INFO: stderr: "" Nov 4 05:21:17.808: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-7599-webhook\" deleted\n" Nov 4 05:21:17.808: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-7599-webhook" deleted error:%!s(<nil>) Nov 4 05:21:17.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig --namespace=gmsa-full-test-windows-7599 exec --namespace=gmsa-full-test-windows-7599 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 4 05:21:24.184: INFO: stderr: "" Nov 4 05:21:24.184: INFO: stdout: "namespace \"gmsa-full-test-windows-7599-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-7599-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 4 05:21:24.184: INFO: stdout:namespace "gmsa-full-test-windows-7599-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-7599-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-7599-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 4 05:21:24.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full 
[Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 650 lines ... test/e2e/apimachinery/garbage_collector.go:1040 Nov 4 05:26:18.762: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig Nov 4 05:26:21.413: INFO: created owner resource "ownercz5sf" Nov 4 05:26:21.519: INFO: created dependent resource "dependentsl6gh" STEP: wait for the owner to be deleted 11/04/22 05:26:21.625 STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/04/22 05:26:26.728 I1104 05:26:57.147709 14081 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 4 05:26:57.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 32 lines ... test/e2e/apimachinery/garbage_collector.go:1040 Nov 4 05:26:18.762: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig Nov 4 05:26:21.413: INFO: created owner resource "ownercz5sf" Nov 4 05:26:21.519: INFO: created dependent resource "dependentsl6gh" STEP: wait for the owner to be deleted 11/04/22 05:26:21.625 STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/04/22 05:26:26.728 I1104 05:26:57.147709 14081 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 4 05:26:57.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 926 lines ... 
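The "error:%!s(<nil>)" strings in the GMSA cleanup further above are a Go formatting quirk rather than a failure: formatting a nil error with the %s verb prints fmt's bad-verb marker, so a fully successful kubectl invocation still logs that token. A minimal reproduction, with %v shown as the nil-safe alternative:

package main

import "fmt"

func main() {
	var err error                 // nil, as after a successful kubectl invocation
	fmt.Printf("error:%s\n", err) // error:%!s(<nil>)  <- the marker seen in the log
	fmt.Printf("error:%v\n", err) // error:<nil>       <- %v handles nil gracefully
	if err != nil {               // or guard explicitly before printing
		fmt.Println("error:", err)
	}
}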
Nov 4 05:48:53.544: INFO: RC rs: sending request to consume 250 millicores Nov 4 05:48:53.544: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-571/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 05:49:01.741: INFO: waiting for 3 replicas (current: 2) Nov 4 05:49:21.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:49:23.661: INFO: RC rs: sending request to consume 250 millicores Nov 4 05:49:23.661: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-571/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 5h0m0s timeout","severity":"error","time":"2022-11-04T05:49:37Z"} ++ early_exit_handler ++ '[' -n 159 ']' ++ kill -TERM 159 ++ cleanup_dind ++ [[ true == \t\r\u\e ]] ++ echo 'Cleaning up after docker' ... skipping 136 lines ... Nov 4 05:54:01.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:14.144: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-jm2t7 to /logs/artifacts/clusters/capz-conf-f5ura0/machines/capz-conf-f5ura0-md-win-5cdfcdfd99-g72q2/crashdumps.tar Nov 4 05:54:16.581: INFO: Collecting boot logs for AzureMachine capz-conf-f5ura0-md-win-jm2t7 Nov 4 05:54:21.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:21.840: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:21.840: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 05:54:21.840: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003081e68, {0x7470e18?, 0xc004be4f60?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484c32, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x7470e18?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484c32, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 STEP: Removing consuming RC rs 11/04/22 05:54:21.946 Nov 4 05:54:21.946: INFO: RC rs: stopping metric consumer Nov 4 05:54:21.946: INFO: RC rs: stopping CPU consumer Nov 4 05:54:21.946: INFO: RC rs: stopping mem consumer Failed to get logs for machine capz-conf-f5ura0-md-win-5cdfcdfd99-g72q2, cluster default/capz-conf-f5ura0: getting a new sftp client connection: ssh: subsystem request failed Nov 4 05:54:22.911: INFO: Collecting logs for Windows node capz-conf-ptz2f in cluster capz-conf-f5ura0 in namespace default STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-571, will wait for the garbage collector to delete the pods 11/04/22 05:54:31.949 I1104 05:54:32.052122 14081 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57 I1104 05:54:32.052154 14081 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57 Nov 4 05:54:32.313: INFO: Deleting ReplicaSet.apps rs took: 110.797777ms ... skipping 109 lines ... 
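"Failed to get logs for machine ...: getting a new sftp client connection: ssh: subsystem request failed" above means the Windows node's SSH server refused the sftp subsystem request, so crash-dump collection over SFTP could not start. The client side of that handshake looks roughly like the sketch below (x/crypto/ssh plus github.com/pkg/sftp; the address and auth config are placeholders). sftp.NewClient is the call that surfaces the quoted error when the server has no sftp subsystem configured.

package e2esketch

import (
	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// newSFTPClient dials SSH and then requests the "sftp" subsystem; a server
// without that subsystem rejects the request, which surfaces as
// "ssh: subsystem request failed".
func newSFTPClient(addr string, cfg *ssh.ClientConfig) (*sftp.Client, error) {
	conn, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	return sftp.NewClient(conn)
}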
Nov 4 05:54:39.542: INFO: Latency metrics for node capz-conf-ptz2f [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-571" for this suite. 11/04/22 05:54:39.542 ------------------------------ • [FAILED] [944.789 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70 ... skipping 148 lines ... Nov 4 05:53:41.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:53:54.690: INFO: RC rs: sending request to consume 250 millicores Nov 4 05:53:54.690: INFO: ConsumeCPU URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-571/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 4 05:54:01.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:21.738: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:21.840: INFO: waiting for 3 replicas (current: 2) Nov 4 05:54:21.840: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 05:54:21.840: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003081e68, {0x7470e18?, 0xc004be4f60?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484c32, 0xa}}, 0xc00083a0f0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x7470e18?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484c32, 0xa}}, {0x7471d78, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 197 lines ... 
Nov 4 05:56:36.590: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 05:56:36.590: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 4 05:56:46.567: INFO: waiting for 3 replicas (current: 1) Nov 4 05:56:57.832: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-ptz2f to /logs/artifacts/clusters/capz-conf-f5ura0/machines/capz-conf-f5ura0-md-win-5cdfcdfd99-hm5d2/crashdumps.tar Nov 4 05:57:00.284: INFO: Collecting boot logs for AzureMachine capz-conf-f5ura0-md-win-ptz2f Failed to get logs for machine capz-conf-f5ura0-md-win-5cdfcdfd99-hm5d2, cluster default/capz-conf-f5ura0: getting a new sftp client connection: ssh: subsystem request failed STEP: Dumping workload cluster default/capz-conf-f5ura0 kube-system pod logs STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-5w2sp, container calico-kube-controllers STEP: Collecting events for Pod kube-system/metrics-server-76f7667fbf-fhp56 STEP: failed to find events of Pod "metrics-server-76f7667fbf-fhp56" STEP: Collecting events for Pod kube-system/containerd-logger-qgskq STEP: Creating log watcher for controller kube-system/csi-proxy-shrdv, container csi-proxy STEP: failed to find events of Pod "containerd-logger-qgskq" STEP: Creating log watcher for controller kube-system/coredns-648b57fd66-9xkm7, container coredns STEP: Creating log watcher for controller kube-system/coredns-648b57fd66-zdg55, container coredns STEP: Fetching kube-system pod logs took 1.630690797s STEP: Dumping workload cluster default/capz-conf-f5ura0 Azure activity log STEP: Collecting events for Pod kube-system/csi-proxy-shrdv STEP: failed to find events of Pod "csi-proxy-shrdv" STEP: Creating log watcher for controller kube-system/csi-proxy-w7jdj, container csi-proxy STEP: Collecting events for Pod kube-system/coredns-648b57fd66-zdg55 STEP: failed to find events of Pod "coredns-648b57fd66-zdg55" STEP: Collecting events for Pod kube-system/kube-proxy-g8frn STEP: failed to find events of Pod "kube-proxy-g8frn" STEP: Collecting events for Pod kube-system/csi-proxy-w7jdj STEP: failed to find events of Pod "csi-proxy-w7jdj" STEP: Creating log watcher for controller kube-system/etcd-capz-conf-f5ura0-control-plane-bt7tm, container etcd STEP: Creating log watcher for controller kube-system/kube-proxy-windows-tdwmd, container kube-proxy STEP: Creating log watcher for controller kube-system/calico-node-lw8n5, container calico-node STEP: Collecting events for Pod kube-system/coredns-648b57fd66-9xkm7 STEP: failed to find events of Pod "coredns-648b57fd66-9xkm7" STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-f5ura0-control-plane-bt7tm STEP: Collecting events for Pod kube-system/kube-proxy-windows-tdwmd STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-f5ura0-control-plane-bt7tm, container kube-apiserver STEP: failed to find events of Pod "kube-proxy-windows-tdwmd" STEP: failed to find events of Pod "kube-apiserver-capz-conf-f5ura0-control-plane-bt7tm" STEP: Creating log watcher for controller kube-system/calico-node-windows-hqnbb, container calico-node-startup STEP: Creating
log watcher for controller kube-system/kube-controller-manager-capz-conf-f5ura0-control-plane-bt7tm, container kube-controller-manager STEP: Collecting events for Pod kube-system/calico-node-lw8n5 STEP: failed to find events of Pod "calico-node-lw8n5" STEP: Collecting events for Pod kube-system/calico-node-windows-hqnbb STEP: failed to find events of Pod "calico-node-windows-hqnbb" STEP: Creating log watcher for controller kube-system/calico-node-windows-r8kqd, container calico-node-startup STEP: Creating log watcher for controller kube-system/kube-proxy-windows-w4m7m, container kube-proxy STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-f5ura0-control-plane-bt7tm STEP: failed to find events of Pod "kube-controller-manager-capz-conf-f5ura0-control-plane-bt7tm" STEP: Creating log watcher for controller kube-system/kube-proxy-g8frn, container kube-proxy STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-f5ura0-control-plane-bt7tm STEP: failed to find events of Pod "kube-scheduler-capz-conf-f5ura0-control-plane-bt7tm" STEP: Collecting events for Pod kube-system/kube-proxy-windows-w4m7m STEP: failed to find events of Pod "kube-proxy-windows-w4m7m" STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-f5ura0-control-plane-bt7tm, container kube-scheduler STEP: Collecting events for Pod kube-system/calico-kube-controllers-755ff8d7b5-5w2sp STEP: failed to find events of Pod "calico-kube-controllers-755ff8d7b5-5w2sp" STEP: Collecting events for Pod kube-system/calico-node-windows-r8kqd STEP: failed to find events of Pod "calico-node-windows-r8kqd" STEP: Creating log watcher for controller kube-system/containerd-logger-6597b, container containerd-logger STEP: Creating log watcher for controller kube-system/calico-node-windows-hqnbb, container calico-node-felix STEP: Collecting events for Pod kube-system/containerd-logger-6597b STEP: failed to find events of Pod "containerd-logger-6597b" STEP: Collecting events for Pod kube-system/etcd-capz-conf-f5ura0-control-plane-bt7tm STEP: failed to find events of Pod "etcd-capz-conf-f5ura0-control-plane-bt7tm" STEP: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-fhp56, container metrics-server STEP: Creating log watcher for controller kube-system/calico-node-windows-r8kqd, container calico-node-felix STEP: Creating log watcher for controller kube-system/containerd-logger-qgskq, container containerd-logger STEP: Fetching activity logs took 2.315743943s ++ popd /home/prow/go/src/k8s.io/windows-testing ... skipping 98 lines ...
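The STEP lines above are the CAPZ log collector fanning out: one streaming log watcher per kube-system container plus an event listing per pod; the many "failed to find events of Pod" lines are consistent with events having expired or the workload API server already being unreachable during teardown. A single watcher reduces to roughly this client-go call (wiring and helper name illustrative):

package e2esketch

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs, roughly what each
// "Creating log watcher for controller ..." step sets up.
func streamContainerLogs(c kubernetes.Interface, ns, pod, container string) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true,
	})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}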
I1104 05:57:49.214859 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received I1104 05:57:50.215484 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment Nov 4 05:58:06.930: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 05:58:06.930: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } I1104 05:58:20.215960 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 05:58:20.215971 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 05:58:20.216: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc001322040>: { currentErr: <*url.Error | 0xc0045f8000>{ Op: "Get", URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment", Err: <*net.OpError | 0xc003c46000>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 4 lines ... }, Err: <*net.timeoutError | 0xac757e0>{}, }, }, previousError: <*errors.errorString | 0xc000118100>{s: "unexpected EOF"}, } Nov 4 05:58:20.216: FAIL: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment": dial tcp 20.90.240.15:6443: i/o timeout - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:435 +0x375 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() test/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a ... skipping 14 lines ... k8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc004425e58, {0x749d31b?, 0xc0041a1ec0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c k8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.2.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:164 +0x85 E1104 05:58:20.217070 14081 runtime.go:79] Observed a panic: framework.FailurePanic{Message:"Nov 4 05:58:20.216: Get \"https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment\": dial tcp 20.90.240.15:6443: i/o timeout - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/autoscaling/autoscaling_utils.go", Line:435, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:435 +0x375\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1()\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26dc7d1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7e7cec8?, 0xc00012e000?}, 0x3?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7e7cec8, 0xc00012e000}, 0xc003c7e588, 0x2f6748a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7e7cec8, 0xc00012e000}, 0x90?, 0x2f66025?, 0x10?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7e7cec8, 0xc00012e000}, 0xc001da487c?, 0xc003ebdbe0?, 0x25c4967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x749d31b?, 0xf?, 0xc001da4870?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc003f183c0, 0x3, 0x6?)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:478 +0x7f\nk8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc004425e58, {0x749d31b?, 0xc0041a1ec0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c\nk8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219\nk8s.io/kubernetes/test/e2e/autoscaling.glob..func7.2.1()\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:164 +0x85"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ... skipping 2 lines ... 
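The "Observed a panic: framework.FailurePanic" above is the situation Ginkgo's quoted advice covers: a framework assertion failure escaped as a panic outside Ginkgo's rescue path (the ResourceConsumer's consume loop later in this log fails inside a goroutine created by newResourceConsumer). The recommended fix is a deferred GinkgoRecover at the top of any goroutine that can assert; a minimal Ginkgo v2 sketch:

package e2esketch

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = It("asserts from a goroutine safely", func() {
	done := make(chan struct{})
	go func() {
		// Without this deferred recover, a failed assertion in the
		// goroutine panics the whole process, as in the log above.
		defer GinkgoRecover()
		defer close(done)
		Expect(1 + 1).To(Equal(2))
	}()
	<-done
})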
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6fb0220?, 0xc004dfe180}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004dfe180?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6fb0220, 0xc004dfe180}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail.func1() test/e2e/framework/log.go:106 +0x7d panic({0x6fb2360, 0xc000dca700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0003e0360, 0x109}, {0xc0044256c8?, 0xc0044256d8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.fail({0xc0003e0360, 0x109}, {0xc0044257a8?, 0x74709fa?, 0xc0044257c8?}) test/e2e/framework/log.go:110 +0x1b4 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00050c200, 0xf4}, {0xc004425840?, 0xc00050c200?, 0xc004425868?}) test/e2e/framework/log.go:62 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4a260, 0xc001322040}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) ... skipping 45 lines ... I1104 06:00:24.221508 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 06:00:51.934: INFO: ConsumeMem failure: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=250&requestSizeMegabytes=100": dial tcp 20.90.240.15:6443: i/o timeout Nov 4 06:00:51.934: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } I1104 06:00:55.224350 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 06:00:55.224424 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 06:01:21.937: INFO: ConsumeMem failure: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=250&requestSizeMegabytes=100": dial tcp 20.90.240.15:6443: i/o timeout Nov 4 06:01:21.937: INFO: Unexpected error: <*errors.errorString | 0xc00020fca0>: { s: "timed out waiting for the condition", } Nov 4 06:01:21.937: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).sendConsumeMemRequest(0xc003f183c0, 0xfa) test/e2e/framework/autoscaling/autoscaling_utils.go:394 +0x107 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).makeConsumeMemRequests(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:309 +0x1f7 created by k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer test/e2e/framework/autoscaling/autoscaling_utils.go:240 +0xb3d I1104 06:01:26.225183 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 06:01:26.225230 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-3095, will wait for the garbage collector to delete the pods 11/04/22 06:01:31.938 I1104 06:01:57.226021 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true I1104 06:01:57.226231 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true Nov 4 06:02:01.941: INFO: Unexpected error: <*url.Error | 0xc0044b9c50>: { Op: "Get", URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment", Err: <*net.OpError | 0xc00349a550>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... 
Zone: "", }, Err: {}, }, } Nov 4 06:02:01.941: FAIL: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment": dial tcp 20.90.240.15:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).CleanUp(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:546 +0x2a5 panic({0x6fb0220, 0xc004dfe180}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004dfe180?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x6fb0220, 0xc004dfe180}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail.func1() test/e2e/framework/log.go:106 +0x7d panic({0x6fb2360, 0xc000dca700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail({0xc0003e0360, 0x109}, {0xc0044257a8?, 0x74709fa?, 0xc0044257c8?}) test/e2e/framework/log.go:110 +0x1b4 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00050c200, 0xf4}, {0xc004425840?, 0xc00050c200?, 0xc004425868?}) test/e2e/framework/log.go:62 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4a260, 0xc001322040}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) ... skipping 30 lines ... [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/04/22 06:02:31.943 STEP: Collecting events from namespace "horizontal-pod-autoscaling-3095". 11/04/22 06:02:31.943 I1104 06:02:59.229961 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 06:02:59.230130 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 06:03:01.944: INFO: Unexpected error: failed to list events in namespace "horizontal-pod-autoscaling-3095": <*url.Error | 0xc0043804b0>: { Op: "Get", URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/events", Err: <*net.OpError | 0xc0042426e0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 4 06:03:01.944: FAIL: failed to list events in namespace "horizontal-pod-autoscaling-3095": Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/events": dial tcp 20.90.240.15:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0016745c0, {0xc004effb40, 0x1f}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x7eb88e8, 0xc004e216c0}, {0xc004effb40, 0x1f}) test/e2e/framework/debug/dump.go:62 +0x8d ... 
skipping 9 lines ... /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3095" for this suite. 11/04/22 06:03:01.945 I1104 06:03:29.230581 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received I1104 06:03:29.230628 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received Nov 4 06:03:31.946: FAIL: Couldn't delete ns: "horizontal-pod-autoscaling-3095": Delete "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095": dial tcp 20.90.240.15:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095", Err:(*net.OpError)(0xc00187a780)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00083a1e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6508a20?, 0xc004244080?, 0xc004178f40?}, {0x7472644, 0x4}, {0xac75db8, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6508a20?, 0xc004244080?, 0x0?}, {0xac75db8?, 0xc002f73f68?, 0x2625699?}) /usr/local/go/src/reflect/value.go:368 +0xbc ------------------------------ • [FAILED] [532.290 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Container Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:162 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:163 ... skipping 70 lines ... 
I1104 05:57:49.214859 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received I1104 05:57:50.215484 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment Nov 4 05:58:06.930: INFO: RC test-deployment: sending request to consume 250 MB Nov 4 05:58:06.930: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } I1104 05:58:20.215960 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 05:58:20.215971 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 05:58:20.216: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc001322040>: { currentErr: <*url.Error | 0xc0045f8000>{ Op: "Get", URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment", Err: <*net.OpError | 0xc003c46000>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 4 lines ... }, Err: <*net.timeoutError | 0xac757e0>{}, }, }, previousError: <*errors.errorString | 0xc000118100>{s: "unexpected EOF"}, } Nov 4 05:58:20.216: FAIL: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment": dial tcp 20.90.240.15:6443: i/o timeout - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:435 +0x375 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() test/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a ... skipping 14 lines ... k8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc004425e58, {0x749d31b?, 0xc0041a1ec0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c k8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.2.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:164 +0x85 E1104 05:58:20.217070 14081 runtime.go:79] Observed a panic: framework.FailurePanic{Message:"Nov 4 05:58:20.216: Get \"https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment\": dial tcp 20.90.240.15:6443: i/o timeout - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/autoscaling/autoscaling_utils.go", Line:435, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:435 +0x375\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1()\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26dc7d1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7e7cec8?, 0xc00012e000?}, 0x3?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7e7cec8, 0xc00012e000}, 0xc003c7e588, 0x2f6748a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7e7cec8, 0xc00012e000}, 0x90?, 0x2f66025?, 0x10?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7e7cec8, 0xc00012e000}, 0xc001da487c?, 0xc003ebdbe0?, 0x25c4967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x749d31b?, 0xf?, 0xc001da4870?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc003f183c0, 0x3, 0x6?)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:478 +0x7f\nk8s.io/kubernetes/test/e2e/autoscaling.(*HPAContainerResourceScaleTest).run(0xc004425e58, {0x749d31b?, 0xc0041a1ec0?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, 0xc00083a1e0)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:322 +0x34c\nk8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219\nk8s.io/kubernetes/test/e2e/autoscaling.glob..func7.2.1()\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:164 +0x85"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ... skipping 2 lines ... 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6fb0220?, 0xc004dfe180})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004dfe180?})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x6fb0220, 0xc004dfe180})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.fail.func1()
	test/e2e/framework/log.go:106 +0x7d
panic({0x6fb2360, 0xc000dca700})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0003e0360, 0x109}, {0xc0044256c8?, 0xc0044256d8?, 0x0?})
	vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225
k8s.io/kubernetes/test/e2e/framework.fail({0xc0003e0360, 0x109}, {0xc0044257a8?, 0x74709fa?, 0xc0044257c8?})
	test/e2e/framework/log.go:110 +0x1b4
k8s.io/kubernetes/test/e2e/framework.Fail({0xc00050c200, 0xf4}, {0xc004425840?, 0xc00050c200?, 0xc004425868?})
	test/e2e/framework/log.go:62 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4a260, 0xc001322040}, {0x0?, 0x0?, 0x0?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0)
... skipping 45 lines ...
I1104 06:00:24.221508 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true
Nov 4 06:00:51.934: INFO: ConsumeMem failure: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=250&requestSizeMegabytes=100": dial tcp 20.90.240.15:6443: i/o timeout
Nov 4 06:00:51.934: INFO: ConsumeMem URL: {https capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 }
I1104 06:00:55.224350 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true
I1104 06:00:55.224424 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true
Nov 4 06:01:21.937: INFO: ConsumeMem failure: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/services/test-deployment-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=250&requestSizeMegabytes=100": dial tcp 20.90.240.15:6443: i/o timeout
Nov 4 06:01:21.937: INFO: Unexpected error:
<*errors.errorString | 0xc00020fca0>: {
    s: "timed out waiting for the condition",
}
Nov 4 06:01:21.937: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).sendConsumeMemRequest(0xc003f183c0, 0xfa)
	test/e2e/framework/autoscaling/autoscaling_utils.go:394 +0x107
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).makeConsumeMemRequests(0xc003f183c0)
	test/e2e/framework/autoscaling/autoscaling_utils.go:309 +0x1f7
created by k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer
	test/e2e/framework/autoscaling/autoscaling_utils.go:240 +0xb3d
I1104 06:01:26.225183 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true
I1104 06:01:26.225230 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true
STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-3095, will wait for the garbage collector to delete the pods 11/04/22 06:01:31.938
I1104 06:01:57.226021 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true
I1104 06:01:57.226231 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true
Nov 4 06:02:01.941: INFO: Unexpected error:
<*url.Error | 0xc0044b9c50>: {
    Op: "Get",
    URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment",
    Err: <*net.OpError | 0xc00349a550>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
... skipping 3 lines ...
Zone: "", }, Err: {}, }, } Nov 4 06:02:01.941: FAIL: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment": dial tcp 20.90.240.15:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).CleanUp(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:546 +0x2a5 panic({0x6fb0220, 0xc004dfe180}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004dfe180?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x6fb0220, 0xc004dfe180}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail.func1() test/e2e/framework/log.go:106 +0x7d panic({0x6fb2360, 0xc000dca700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail({0xc0003e0360, 0x109}, {0xc0044257a8?, 0x74709fa?, 0xc0044257c8?}) test/e2e/framework/log.go:110 +0x1b4 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00050c200, 0xf4}, {0xc004425840?, 0xc00050c200?, 0xc004425868?}) test/e2e/framework/log.go:62 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4a260, 0xc001322040}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) ... skipping 30 lines ... [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/04/22 06:02:31.943 STEP: Collecting events from namespace "horizontal-pod-autoscaling-3095". 11/04/22 06:02:31.943 I1104 06:02:59.229961 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=5m8s&timeoutSeconds=308&watch=true I1104 06:02:59.230130 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m43s&timeoutSeconds=523&watch=true Nov 4 06:03:01.944: INFO: Unexpected error: failed to list events in namespace "horizontal-pod-autoscaling-3095": <*url.Error | 0xc0043804b0>: { Op: "Get", URL: "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/events", Err: <*net.OpError | 0xc0042426e0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 4 06:03:01.944: FAIL: failed to list events in namespace "horizontal-pod-autoscaling-3095": Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/events": dial tcp 20.90.240.15:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0016745c0, {0xc004effb40, 0x1f}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x7eb88e8, 0xc004e216c0}, {0xc004effb40, 0x1f}) test/e2e/framework/debug/dump.go:62 +0x8d ... 
skipping 9 lines ... /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3095" for this suite. 11/04/22 06:03:01.945 I1104 06:03:29.230581 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received I1104 06:03:29.230628 14081 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received Nov 4 06:03:31.946: FAIL: Couldn't delete ns: "horizontal-pod-autoscaling-3095": Delete "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095": dial tcp 20.90.240.15:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095", Err:(*net.OpError)(0xc00187a780)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00083a1e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6508a20?, 0xc004244080?, 0xc004178f40?}, {0x7472644, 0x4}, {0xac75db8, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6508a20?, 0xc004244080?, 0x0?}, {0xac75db8?, 0xc002f73f68?, 0x2625699?}) /usr/local/go/src/reflect/value.go:368 +0xbc << End Captured GinkgoWriter Output Nov 4 05:58:20.216: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-3095/deployments/test-deployment": dial tcp 20.90.240.15:6443: i/o timeout - error from a previous attempt: unexpected EOF In [It] at: test/e2e/framework/autoscaling/autoscaling_utils.go:435 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc003f183c0) test/e2e/framework/autoscaling/autoscaling_utils.go:435 +0x375 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() ... skipping 17 lines ... k8s.io/kubernetes/test/e2e/autoscaling.scaleUpContainerResource({0x749d31b?, 0x61a0885?}, {{0x7472b24, 0x4}, {0x747bca8, 0x7}, {0x7484070, 0xa}}, {0x747782c, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:361 +0x219 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.2.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:164 +0x85 There were additional failures detected after the initial failure: [FAILED] Nov 4 06:03:01.944: failed to list events in namespace "horizontal-pod-autoscaling-3095": Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095/events": dial tcp 20.90.240.15:6443: i/o timeout In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0016745c0, {0xc004effb40, 0x1f}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x7eb88e8, 0xc004e216c0}, {0xc004effb40, 0x1f}) ... skipping 6 lines ... 
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6508a20?, 0xc004244100?, 0xc00394ffb0?}, {0x7472644, 0x4}, {0xac75db8, 0x0, 0xc0008e2be8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6508a20?, 0xc004244100?, 0x28dda7c?}, {0xac75db8?, 0xc00394ff80?, 0xc00361a400?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
----------
[FAILED]
Nov 4 06:03:31.946: Couldn't delete ns: "horizontal-pod-autoscaling-3095": Delete "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095": dial tcp 20.90.240.15:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-3095", Err:(*net.OpError)(0xc00187a780)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00083a1e0)
... skipping 13 lines ...
STEP: Creating a kubernetes client 11/04/22 06:03:31.949
Nov 4 06:03:31.949: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig
I1104 06:03:31.950845 14081 discovery.go:214] Invalidating discovery information
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/04/22 06:03:31.95
I1104 06:04:00.236518 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true
I1104 06:04:00.237005 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true
Nov 4 06:04:01.952: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.90.240.15:6443: i/o timeout
I1104 06:04:31.238226 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true
I1104 06:04:31.238845 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true
E1104 06:04:31.288272 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host - error from a previous attempt: dial tcp 20.90.240.15:6443: i/o timeout
E1104 06:04:31.288401 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host - error from a previous attempt: dial tcp 20.90.240.15:6443: i/o timeout
I1104 06:04:32.107172 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:32.152009 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:32.152172 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
I1104 06:04:32.377272 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:32.407882 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:32.407983 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 4 06:04:33.954: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.90.240.15:6443: i/o timeout
Nov 4 06:04:34.010: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 4 06:04:34.010: INFO: Unexpected error:
<*errors.errorString | 0xc00020fca0>: {
    s: "timed out waiting for the condition",
}
Nov 4 06:04:34.010: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00083a0f0)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/node/init/init.go:32
Nov 4 06:04:34.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/04/22 06:04:34.051
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  tear down framework | framework.go:193
------------------------------
• [FAILED] [62.102 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [BeforeEach]
  set up framework | framework.go:178
  [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:119
    Should not scale up on a busy sidecar with an idle application
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:126
... skipping 4 lines ...
STEP: Creating a kubernetes client 11/04/22 06:03:31.949
Nov 4 06:03:31.949: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig
I1104 06:03:31.950845 14081 discovery.go:214] Invalidating discovery information
STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/04/22 06:03:31.95
I1104 06:04:00.236518 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true
I1104 06:04:00.237005 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true
Nov 4 06:04:01.952: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.90.240.15:6443: i/o timeout
I1104 06:04:31.238226 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true
I1104 06:04:31.238845 14081 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true
E1104 06:04:31.288272 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host - error from a previous attempt: dial tcp 20.90.240.15:6443: i/o timeout
E1104 06:04:31.288401 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host - error from a previous attempt: dial tcp 20.90.240.15:6443: i/o timeout
I1104 06:04:32.107172 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:32.152009 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:32.152172 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
I1104 06:04:32.377272 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:32.407882 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:32.407983 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 4 06:04:33.954: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.90.240.15:6443: i/o timeout
Nov 4 06:04:34.010: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 4 06:04:34.010: INFO: Unexpected error:
<*errors.errorString | 0xc00020fca0>: {
    s: "timed out waiting for the condition",
}
Nov 4 06:04:34.010: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00083a0f0)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  test/e2e/framework/node/init/init.go:32
... skipping 20 lines ...
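The "timed out waiting for the condition" failures above come out of the apimachinery polling helpers visible in these stack traces (wait.PollImmediate and friends): the condition keeps returning false while the API server is unreachable, and the poller eventually gives up. A minimal sketch of that polling contract follows; getReplicas is a hypothetical stand-in for the framework's ResourceConsumer.GetReplicas, not the real implementation.

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Hypothetical stand-in; the real GetReplicas queries the API server
		// and fails the test outright on a transport error.
		getReplicas := func() int { return 1 }
		want := 3

		// PollImmediate runs the condition once right away, then every
		// interval, until it returns true, returns an error, or the timeout
		// elapses.
		err := wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
			return getReplicas() == want, nil
		})
		if err != nil {
			// On timeout this is wait.ErrWaitTimeout, whose message is exactly
			// the "timed out waiting for the condition" text in the FAIL lines.
			fmt.Println(err)
		}
	}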
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/04/22 06:04:34.052
Nov 4 06:04:34.052: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-f5ura0.kubeconfig
I1104 06:04:34.053601 14081 discovery.go:214] Invalidating discovery information
STEP: Building a namespace api object, basename controllerrevisions 11/04/22 06:04:34.053
Nov 4 06:04:34.072: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
I1104 06:04:34.680153 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:34.697970 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:34.698192 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2542/pods?labelSelector=group%3Dtaint-eviction-b&resourceVersion=38486": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
I1104 06:04:35.429632 14081 reflector.go:257] Listing and watching *v1.Pod from test/e2e/node/taints.go:147
W1104 06:04:35.458446 14081 reflector.go:424] test/e2e/node/taints.go:147: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
E1104 06:04:35.458521 14081 reflector.go:140] test/e2e/node/taints.go:147: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-2962/pods?labelSelector=group%3Dtaint-eviction-4&resourceVersion=38471": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 4 06:04:36.108: INFO: Unexpected error while creating namespace: Post "https://capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp: lookup capz-conf-f5ura0-da14b741.uksouth.cloudapp.azure.com on 10.63.240.10:53: no such host
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-11-04T06:04:37Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-11-04T06:04:37Z"}