Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 5h15m
Revision | main
... skipping 59 lines ...
Mon, 07 Nov 2022 00:51:13 +0000: running gmsa setup
Mon, 07 Nov 2022 00:51:13 +0000: setting up domain vm in gmsa-dc-2790 with keyvault capz-ci-gmsa
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh github.com/drone/envsubst/v2/cmd/envsubst envsubst v2.0.0-20210730161058-179042472c46
go: downloading github.com/drone/envsubst/v2 v2.0.0-20210730161058-179042472c46
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
WARNING: Failed to query a3dadaa5-8e1b-459e-abb2-f4b9241bf73a by invoking Graph API. If you don't have permission to query Graph API, please specify --assignee-object-id and --assignee-principal-type.
WARNING: Assuming a3dadaa5-8e1b-459e-abb2-f4b9241bf73a as an object ID.
Pre-reqs are met for creating Domain vm
{
  "id": "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/gmsa-dc-2790",
  "location": "westus3",
  "managedBy": null,
... skipping 3 lines ...
  },
  "tags": {
    "creationTimestamp": "2022-11-07T00:51:25Z"
  },
  "type": "Microsoft.Resources/resourceGroups"
}
ERROR: (ResourceNotFound) The Resource 'Microsoft.Compute/virtualMachines/dc-2790' under resource group 'gmsa-dc-2790' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound
Message: The Resource 'Microsoft.Compute/virtualMachines/dc-2790' under resource group 'gmsa-dc-2790' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Creating Domain vm
WARNING: It is recommended to use parameter "--public-ip-sku Standard" to create new VM with Standard public IP. Please note that the default public IP used for VM creation will be changed from Basic to Standard in the future.
{
  "fqdns": "",
... skipping 13 lines ...
  "privateIpAddress": "172.16.0.4",
  "publicIpAddress": "",
  "resourceGroup": "gmsa-dc-2790",
  "zones": ""
}
WARNING: Command group 'network bastion' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
ERROR: (ResourceNotFound) The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-2790' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound
Message: The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-2790' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Mon, 07 Nov 2022 01:01:54 +0000: starting to create cluster
WARNING: The installed extension 'capi' is in preview.
Using ./capz/templates/gmsa.yaml
WARNING: Command group 'capi' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
... skipping 5 lines ...
WARNING: Merged "capi-manager" as current context in /root/.kube/config
WARNING: ✓ Obtained AKS credentials
WARNING: ✓ Created Cluster Identity Secret
WARNING: ✓ Initialized management cluster
WARNING: ✓ Generated workload cluster configuration at "capz-conf-ie6pqe.yaml"
WARNING: ✓ Created workload cluster "capz-conf-ie6pqe"
Error: "capz-conf-ie6pqe-kubeconfig" not found in namespace "default": secrets "capz-conf-ie6pqe-kubeconfig" not found
... skipping repeated "capz-conf-ie6pqe-kubeconfig" not found errors ...
WARNING: ✓ Workload cluster is accessible
WARNING: ✓ Workload access configuration written to "capz-conf-ie6pqe.kubeconfig"
WARNING: ✓ Deployed CNI to workload cluster
WARNING: ✓ Deployed Windows Calico support to workload cluster
WARNING: ✓ Deployed Windows kube-proxy support to workload cluster
WARNING: ✓ Workload cluster is ready
... skipping 2618 lines ...
STEP: Destroying namespace "gc-6495" for this suite.
11/07/22 01:37:10.337
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  test/e2e/apps/daemon_set.go:294
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/07/22 01:37:10.404
Nov 7 01:37:10.405: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig
I1107 01:37:10.406503 15004 discovery.go:214] Invalidating discovery information
... skipping 10 lines ...
I1107 01:37:10.720525 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146
I1107 01:37:10.843573 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:146
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/apps/daemon_set.go:294
STEP: Creating a simple DaemonSet "daemon-set" 11/07/22 01:37:11.104
STEP: Check that daemon pods launch on every node of the cluster. 11/07/22 01:37:11.174
Nov 7 01:37:11.244: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:11.314: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Nov 7 01:37:11.314: INFO: Node capz-conf-hd8wg is running 0 daemon pod, expected 1
... skipping 21 lines ...
Nov 7 01:37:19.380: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:19.445: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 7 01:37:19.445: INFO: Node capz-conf-hd8wg is running 0 daemon pod, expected 1
Nov 7 01:37:20.381: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:20.444: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Nov 7 01:37:20.444: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/07/22 01:37:20.507
Nov 7 01:37:20.718: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:20.782: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 7 01:37:20.782: INFO: Node capz-conf-hd8wg is running 0 daemon pod, expected 1
Nov 7 01:37:21.849: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:21.912: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 7 01:37:21.912: INFO: Node capz-conf-hd8wg is running 0 daemon pod, expected 1
... skipping 9 lines ...
Nov 7 01:37:25.849: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:25.913: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Nov 7 01:37:25.913: INFO: Node capz-conf-hd8wg is running 0 daemon pod, expected 1
Nov 7 01:37:26.849: INFO: DaemonSet pods can't tolerate node capz-conf-ie6pqe-control-plane-8mddd with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 7 01:37:26.913: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Nov 7 01:37:26.913: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted. 11/07/22 01:37:26.913
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:111
STEP: Deleting DaemonSet "daemon-set" 11/07/22 01:37:27.038
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2202, will wait for the garbage collector to delete the pods 11/07/22 01:37:27.038
I1107 01:37:27.101311 15004 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57
I1107 01:37:27.101367 15004 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57
... skipping 19 lines ...
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-2202" for this suite. 11/07/22 01:37:33.22
------------------------------
• [SLOW TEST] [22.882 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  test/e2e/apps/daemon_set.go:294
  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-apps] Daemon set [Serial]
      set up framework | framework.go:178
    STEP: Creating a kubernetes client 11/07/22 01:37:10.404
... skipping 12 lines ...
... skipping duplicated captured GinkgoWriter output ...
... skipping 128 lines ...
[It] should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:905
Nov 7 01:37:34.431: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig
Nov 7 01:37:36.844: INFO: created owner resource "ownerxrkmp"
Nov 7 01:37:36.910: INFO: created dependent resource "dependentmvhv7"
Nov 7 01:37:37.042: INFO: created canary resource "canaryctkkt"
I1107 01:37:42.449172 15004 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/node/init/init.go:32
Nov 7 01:37:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
... skipping 31 lines ...
... skipping duplicated captured GinkgoWriter output ...
... skipping 1329 lines ...
Nov 7 02:07:35.329: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:07:47.291: INFO: RC rs: sending request to consume 250 millicores
Nov 7 02:07:47.292: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4013/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov 7 02:07:55.328: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:08:15.324: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:08:15.387: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:08:15.387: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas:
    <*errors.errorString | 0xc000205c90>: {
        s: "timed out waiting for the condition",
    }
Nov 7 02:08:15.387: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0021b1e68, {0x74748d6?, 0xc000564a20?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000de4d20)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
... skipping 119 lines ...
Nov 7 02:08:32.115: INFO: Latency metrics for node capz-conf-n64xz
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-4013" for this suite.
11/07/22 02:08:32.115
------------------------------
• [FAILED] [943.085 seconds]
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicaSet
  test/e2e/autoscaling/horizontal_pod_autoscaling.go:69
    [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
... skipping 147 lines ...
... skipping duplicated captured GinkgoWriter output ...
... skipping 337 lines ...
STEP: verifying the node doesn't have the label node 11/07/22 02:14:24.375
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Nov 7 02:14:24.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
I1107 02:14:24.502945 15004 request.go:914] Error in request: resource name may not be empty
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
... skipping 115 lines ...
... skipping duplicated captured GinkgoWriter output ...
... skipping 236 lines ...
Nov 7 02:20:25.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io'
Nov 7 02:20:25.897: INFO: stderr: ""
Nov 7 02:20:25.897: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n"
Nov 7 02:20:25.897: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted
error:%!s(<nil>)
Nov 7 02:20:25.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-4687-webhook'
Nov 7 02:20:26.227: INFO: stderr: ""
Nov 7 02:20:26.227: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-4687-webhook\" deleted\n"
Nov 7 02:20:26.227: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-4687-webhook" deleted
error:%!s(<nil>)
Nov 7 02:20:26.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig --namespace=gmsa-full-test-windows-4687 exec --namespace=gmsa-full-test-windows-4687 webhook-deployer -- kubectl delete -f /manifests.yml'
Nov 7 02:20:32.348: INFO: stderr: ""
Nov 7 02:20:32.348: INFO: stdout: "namespace \"gmsa-full-test-windows-4687-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-4687-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-4687-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-4687-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n"
Nov 7 02:20:32.348: INFO: stdout:namespace "gmsa-full-test-windows-4687-webhook" deleted
secret "gmsa-webhook" deleted
serviceaccount "gmsa-webhook" deleted
clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-4687-webhook-gmsa-webhook-rbac-role" deleted
clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-4687-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-4687-webhook-gmsa-webhook-rbac-role" deleted
deployment.apps "gmsa-webhook" deleted
service "gmsa-webhook" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted
error:%!s(<nil>)
[AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 7 02:20:32.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
... skipping 157 lines ...
... skipping duplicated captured GinkgoWriter output ...
... skipping 449 lines ...
Nov 7 02:38:34.087: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:38:46.004: INFO: RC rc: sending request to consume 250 millicores
Nov 7 02:38:46.004: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5896/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov 7 02:38:54.086: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:39:14.084: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:39:14.147: INFO: waiting for 3 replicas (current: 2)
Nov 7 02:39:14.147: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas:
    <*errors.errorString | 0xc000205c90>: {
        s: "timed out waiting for the condition",
    }
Nov 7 02:39:14.147: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00213fe68, {0x74748d0?, 0xc000565d40?}, {{0x0, 0x0}, {0x747491a, 0x2}, {0x74c3fbc, 0x15}}, 0xc000de4d20)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d0?, 0x61a2e85?}, {{0x0, 0x0}, {0x747491a, 0x2}, {0x74c3fbc, 0x15}}, {0x7475836, 0x3}, ...)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
... skipping 119 lines ...
Nov 7 02:39:29.979: INFO: Latency metrics for node capz-conf-n64xz
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  tear down framework | framework.go:193
STEP: Destroying namespace "horizontal-pod-autoscaling-5896" for this suite.
11/07/22 02:39:29.979 ------------------------------ • [FAILED] [957.599 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicationController test/e2e/autoscaling/horizontal_pod_autoscaling.go:79 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability test/e2e/autoscaling/horizontal_pod_autoscaling.go:80 ... skipping 146 lines ... Nov 7 02:38:34.087: INFO: waiting for 3 replicas (current: 2) Nov 7 02:38:46.004: INFO: RC rc: sending request to consume 250 millicores Nov 7 02:38:46.004: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5896/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 7 02:38:54.086: INFO: waiting for 3 replicas (current: 2) Nov 7 02:39:14.084: INFO: waiting for 3 replicas (current: 2) Nov 7 02:39:14.147: INFO: waiting for 3 replicas (current: 2) Nov 7 02:39:14.147: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 02:39:14.147: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00213fe68, {0x74748d0?, 0xc000565d40?}, {{0x0, 0x0}, {0x747491a, 0x2}, {0x74c3fbc, 0x15}}, 0xc000de4d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74748d0?, 0x61a2e85?}, {{0x0, 0x0}, {0x747491a, 0x2}, {0x74c3fbc, 0x15}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 157 lines ... 
I1107 02:39:30.375626 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 02:39:30.498448 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/07/22 02:39:30.498 Nov 7 02:39:30.568: INFO: Waiting up to 2m0s for pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27" in namespace "var-expansion-296" to be "running" Nov 7 02:39:30.636: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 67.598898ms Nov 7 02:39:32.701: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133370898s Nov 7 02:39:34.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13070179s Nov 7 02:39:36.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130769763s Nov 7 02:39:38.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130625563s ... skipping 105 lines ... 
I1107 02:39:30.375626 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 02:39:30.498448 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/07/22 02:39:30.498 Nov 7 02:39:30.568: INFO: Waiting up to 2m0s for pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27" in namespace "var-expansion-296" to be "running" Nov 7 02:39:30.636: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 67.598898ms Nov 7 02:39:32.701: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133370898s Nov 7 02:39:34.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13070179s Nov 7 02:39:36.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130769763s Nov 7 02:39:38.699: INFO: Pod "var-expansion-9467e2a1-ceb2-40df-9bcd-b1e56c53ab27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130625563s ... skipping 905 lines ... 
Nov 7 03:00:15.539: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:27.479: INFO: RC test-deployment: sending request to consume 250 MB Nov 7 03:00:27.479: INFO: ConsumeMem URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5386/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 7 03:00:35.535: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.535: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.597: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.597: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 03:00:55.597: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00358fe68, {0x74a0e0e?, 0xc003605da0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 7 03:01:11.800: INFO: Latency metrics for node capz-conf-n64xz [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-5386" for this suite. 
11/07/22 03:01:11.801 ------------------------------ • [FAILED] [972.575 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:157 ... skipping 149 lines ... Nov 7 03:00:15.539: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:27.479: INFO: RC test-deployment: sending request to consume 250 MB Nov 7 03:00:27.479: INFO: ConsumeMem URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-5386/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 7 03:00:35.535: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.535: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.597: INFO: waiting for 3 replicas (current: 2) Nov 7 03:00:55.597: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 03:00:55.597: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00358fe68, {0x74a0e0e?, 0xc003605da0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 3763 lines ... 
STEP: verifying the node doesn't have the label kubernetes.io/e2e-470b9476-bf5c-4c9f-a48f-3edfc42e89ee 11/07/22 03:44:54.747 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 03:44:54.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 03:44:54.876741 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 231 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-470b9476-bf5c-4c9f-a48f-3edfc42e89ee 11/07/22 03:44:54.747 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 03:44:54.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 03:44:54.876741 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 983 lines ... STEP: Destroying namespace "horizontal-pod-autoscaling-8236" for this suite. 
11/07/22 04:10:28.43 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/07/22 04:10:28.51 Nov 7 04:10:28.510: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig I1107 04:10:28.511756 15004 discovery.go:214] Invalidating discovery information ... skipping 8 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/07/22 04:10:28.824 I1107 04:10:28.824636 15004 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:10:28.824659 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:10:28.946019 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Nov 7 04:10:29.014: INFO: Waiting up to 2m0s for pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" in namespace "var-expansion-6344" to be "container 0 failed with reason CreateContainerConfigError" Nov 7 04:10:29.076: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 61.222193ms
Nov 7 04:10:31.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124016705s
Nov 7 04:10:33.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123383384s
Nov 7 04:10:35.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123278247s
Nov 7 04:10:35.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Nov 7 04:10:35.138: INFO: Deleting pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" in namespace "var-expansion-6344"
Nov 7 04:10:35.207: INFO: Wait up to 5m0s for pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/node/init/init.go:32
Nov 7 04:10:37.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Variable Expansion
... skipping 4 lines ...
tear down framework | framework.go:193
STEP: Destroying namespace "var-expansion-6344" for this suite. 11/07/22 04:10:37.397
------------------------------
• [SLOW TEST] [8.953 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
test/e2e/common/node/expansion.go:186
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-node] Variable Expansion
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/07/22 04:10:28.51
... skipping 10 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/07/22 04:10:28.824
I1107 04:10:28.824636 15004 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146
I1107 04:10:28.824659 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146
I1107 04:10:28.946019 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146
[BeforeEach] [sig-node] Variable Expansion
test/e2e/framework/metrics/init/init.go:31
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
test/e2e/common/node/expansion.go:186
Nov 7 04:10:29.014: INFO: Waiting up to 2m0s for pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" in namespace "var-expansion-6344" to be "container 0 failed with reason CreateContainerConfigError"
Nov 7 04:10:29.076: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 61.222193ms
Nov 7 04:10:31.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124016705s
Nov 7 04:10:33.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123383384s
Nov 7 04:10:35.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.123278247s
Nov 7 04:10:35.138: INFO: Pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Nov 7 04:10:35.138: INFO: Deleting pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" in namespace "var-expansion-6344"
Nov 7 04:10:35.207: INFO: Wait up to 5m0s for pod "var-expansion-465ab2b5-8436-418f-9271-26fd3304508a" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/node/init/init.go:32
Nov 7 04:10:37.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Variable Expansion
... skipping 960 lines ...
STEP: Destroying namespace "gc-8787" for this suite. 11/07/22 04:16:18.343
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
test/e2e/common/node/expansion.go:152
[BeforeEach] [sig-node] Variable Expansion
set up framework | framework.go:178
STEP: Creating a kubernetes client 11/07/22 04:16:18.413
Nov 7 04:16:18.414: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig
I1107 04:16:18.414888 15004 discovery.go:214] Invalidating discovery information
... skipping 8 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/07/22 04:16:18.724 I1107 04:16:18.724396 15004 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:16:18.724418 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:16:18.845649 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Nov 7 04:16:18.919: INFO: Waiting up to 2m0s for pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" in namespace "var-expansion-533" to be "container 0 failed with reason CreateContainerConfigError" Nov 7 04:16:18.980: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 61.454976ms Nov 7 04:16:21.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124603668s Nov 7 04:16:23.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124835148s Nov 7 04:16:25.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123761321s Nov 7 04:16:27.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12493268s Nov 7 04:16:29.042: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123522825s ... skipping 29 lines ... 
Nov 7 04:17:29.042: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.123018343s
Nov 7 04:17:31.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.124705254s
Nov 7 04:17:33.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.125087597s
Nov 7 04:17:35.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.123996411s
Nov 7 04:17:37.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.124233252s
Nov 7 04:17:39.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.125193646s
Nov 7 04:17:39.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Nov 7 04:17:39.044: INFO: Deleting pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" in namespace "var-expansion-533"
Nov 7 04:17:39.113: INFO: Wait up to 5m0s for pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
test/e2e/framework/node/init/init.go:32
Nov 7 04:17:41.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Variable Expansion
... skipping 4 lines ...
tear down framework | framework.go:193
STEP: Destroying namespace "var-expansion-533" for this suite.
11/07/22 04:17:41.308 ------------------------------ • [SLOW TEST] [82.960 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/07/22 04:16:18.413 ... skipping 10 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/07/22 04:16:18.724 I1107 04:16:18.724396 15004 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:16:18.724418 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 04:16:18.845649 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Nov 7 04:16:18.919: INFO: Waiting up to 2m0s for pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" in namespace "var-expansion-533" to be "container 0 failed with reason CreateContainerConfigError" Nov 7 04:16:18.980: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 61.454976ms Nov 7 04:16:21.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124603668s Nov 7 04:16:23.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.124835148s
Nov 7 04:16:25.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123761321s
Nov 7 04:16:27.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12493268s
Nov 7 04:16:29.042: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123522825s
... skipping 29 lines ...
Nov 7 04:17:29.042: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.123018343s
Nov 7 04:17:31.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.124705254s
Nov 7 04:17:33.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.125087597s
Nov 7 04:17:35.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.123996411s
Nov 7 04:17:37.043: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.124233252s
Nov 7 04:17:39.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211": Phase="Pending", Reason="", readiness=false.
Elapsed: 1m20.125193646s Nov 7 04:17:39.044: INFO: Pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 7 04:17:39.044: INFO: Deleting pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" in namespace "var-expansion-533" Nov 7 04:17:39.113: INFO: Wait up to 5m0s for pod "var-expansion-51e6ffbe-b09e-486b-84ba-6acc46ac9211" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 7 04:17:41.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 613 lines ... Nov 7 04:35:16.420: INFO: RC test-deployment: sending request to consume 250 millicores Nov 7 04:35:16.420: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-67/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 7 04:35:24.504: INFO: waiting for 3 replicas (current: 2) I1107 04:35:26.615923 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received Nov 7 04:35:44.505: INFO: waiting for 3 replicas (current: 2) Nov 7 04:35:44.568: INFO: waiting for 3 replicas (current: 2) Nov 7 04:35:44.568: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 04:35:44.568: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00582fe68, {0x74a0e0e?, 0xc0025521e0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 
{0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 7 04:36:01.114: INFO: Latency metrics for node capz-conf-n64xz [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-67" for this suite. 11/07/22 04:36:01.114 ------------------------------ • [FAILED] [942.873 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 ... skipping 149 lines ... Nov 7 04:35:16.420: INFO: RC test-deployment: sending request to consume 250 millicores Nov 7 04:35:16.420: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-67/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 7 04:35:24.504: INFO: waiting for 3 replicas (current: 2) I1107 04:35:26.615923 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received Nov 7 04:35:44.505: INFO: waiting for 3 replicas (current: 2) Nov 7 04:35:44.568: INFO: waiting for 3 replicas (current: 2) Nov 7 04:35:44.568: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 04:35:44.568: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00582fe68, {0x74a0e0e?, 0xc0025521e0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 
0xc000de4d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 578 lines ... I1107 04:42:05.511813 15004 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 04:42:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 04:42:05.640071 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 61 lines ... I1107 04:42:05.511813 15004 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 04:42:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 04:42:05.640071 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 166 lines ... 
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c3c16d09-8308-43fb-a82d-c3fdaa14a333 11/07/22 04:42:24.317 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 04:42:24.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 04:42:24.445597 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 76 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-c3c16d09-8308-43fb-a82d-c3fdaa14a333 11/07/22 04:42:24.317 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 7 04:42:24.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 I1107 04:42:24.445597 15004 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 156 lines ... 
Nov 7 04:57:10.764: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:23.754: INFO: RC test-deployment: sending request to consume 250 MB Nov 7 04:57:23.754: INFO: ConsumeMem URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2832/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 7 04:57:30.763: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.764: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.826: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.826: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 04:57:50.827: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002f77e68, {0x74a0e0e?, 0xc004f08c60?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 7 04:58:07.019: INFO: Latency metrics for node capz-conf-n64xz [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-2832" for this suite. 
11/07/22 04:58:07.019 ------------------------------ • [FAILED] [942.571 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 ... skipping 150 lines ... Nov 7 04:57:10.764: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:23.754: INFO: RC test-deployment: sending request to consume 250 MB Nov 7 04:57:23.754: INFO: ConsumeMem URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2832/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 7 04:57:30.763: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.764: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.826: INFO: waiting for 3 replicas (current: 2) Nov 7 04:57:50.826: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 04:57:50.827: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002f77e68, {0x74a0e0e?, 0xc004f08c60?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x747b2ea, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 1355 lines ... 
Nov 7 05:22:02.780: INFO: RC test-deployment: sending request to consume 250 millicores Nov 7 05:22:02.781: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3207/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 7 05:22:09.850: INFO: waiting for 3 replicas (current: 2) I1107 05:22:21.152452 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received Nov 7 05:22:29.848: INFO: waiting for 3 replicas (current: 2) Nov 7 05:22:29.910: INFO: waiting for 3 replicas (current: 2) Nov 7 05:22:29.910: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 05:22:29.910: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc004a55e68, {0x74a0e0e?, 0xc001f034a0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 7 05:22:46.123: INFO: Latency metrics for node capz-conf-n64xz [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3207" for this suite. 
11/07/22 05:22:46.123 ------------------------------ • [FAILED] [942.788 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55 ... skipping 149 lines ... Nov 7 05:22:02.780: INFO: RC test-deployment: sending request to consume 250 millicores Nov 7 05:22:02.781: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3207/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 7 05:22:09.850: INFO: waiting for 3 replicas (current: 2) I1107 05:22:21.152452 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received Nov 7 05:22:29.848: INFO: waiting for 3 replicas (current: 2) Nov 7 05:22:29.910: INFO: waiting for 3 replicas (current: 2) Nov 7 05:22:29.910: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 05:22:29.910: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc004a55e68, {0x74a0e0e?, 0xc001f034a0?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, 0xc000de4d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74a0e0e?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x7487b2e, 0xa}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 878 lines ... STEP: Destroying namespace "daemonsets-5886" for this suite. 
11/07/22 05:24:51.452 << End Captured GinkgoWriter Output ------------------------------ SS ------------------------------ [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/07/22 05:24:51.52 ... skipping 12 lines ... I1107 05:24:51.836651 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 05:24:51.959256 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 7 05:24:52.290: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 7 05:24:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 6 lines ... 
------------------------------ • [0.905 seconds] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:27 attempt to deploy past allocatable memory limits test/e2e/windows/memory_limits.go:59 should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 14 lines ... I1107 05:24:51.836651 15004 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1107 05:24:51.959256 15004 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 7 05:24:52.290: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 7 05:24:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 34 lines ... 
Nov 7 05:24:52.879: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig Nov 7 05:24:55.292: INFO: created owner resource "owner2w7wf" Nov 7 05:24:55.364: INFO: created dependent resource "dependentc6qbt" STEP: wait for the owner to be deleted 11/07/22 05:24:55.434 I1107 05:24:56.082992 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 8 items received STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/07/22 05:25:15.497 I1107 05:25:45.753245 15004 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 7 05:25:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 33 lines ... 
Nov 7 05:24:52.879: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig Nov 7 05:24:55.292: INFO: created owner resource "owner2w7wf" Nov 7 05:24:55.364: INFO: created dependent resource "dependentc6qbt" STEP: wait for the owner to be deleted 11/07/22 05:24:55.434 I1107 05:24:56.082992 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 8 items received STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/07/22 05:25:15.497 I1107 05:25:45.753245 15004 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 7 05:25:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 52 lines ... 
Nov 7 05:42:52.433: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-11-07 05:37:41 +0000 UTC restartedAt=2022-11-07 05:42:51 +0000 UTC (5m10s) I1107 05:43:20.336812 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received I1107 05:47:11.268507 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 7 items received Nov 7 05:48:08.211: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-11-07 05:42:56 +0000 UTC restartedAt=2022-11-07 05:48:07 +0000 UTC (5m11s) STEP: getting restart delay after a capped delay 11/07/22 05:48:08.211 I1107 05:50:13.399757 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 8 items received {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 5h0m0s timeout","severity":"error","time":"2022-11-07T05:50:38Z"} ++ early_exit_handler ++ '[' -n 163 ']' ++ kill -TERM 163 ++ cleanup_dind ++ [[ true == \t\r\u\e ]] ++ echo 'Cleaning up after docker' ... skipping 360 lines ... 
Nov 7 05:54:29.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 7 05:54:29.995: INFO: stderr: "" Nov 7 05:54:29.995: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 7 05:54:29.995: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 7 05:54:29.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-3778-webhook' Nov 7 05:54:30.355: INFO: stderr: "" Nov 7 05:54:30.355: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-3778-webhook\" deleted\n" Nov 7 05:54:30.355: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-3778-webhook" deleted error:%!s(<nil>) Nov 7 05:54:30.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig --namespace=gmsa-full-test-windows-3778 exec --namespace=gmsa-full-test-windows-3778 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 7 05:54:36.505: INFO: stderr: "" Nov 7 05:54:36.506: INFO: stdout: "namespace \"gmsa-full-test-windows-3778-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-3778-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io 
\"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 7 05:54:36.506: INFO: stdout:namespace "gmsa-full-test-windows-3778-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-3778-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 7 05:54:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 145 lines ... 
Nov 7 05:54:29.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 7 05:54:29.995: INFO: stderr: "" Nov 7 05:54:29.995: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 7 05:54:29.995: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 7 05:54:29.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-3778-webhook' Nov 7 05:54:30.355: INFO: stderr: "" Nov 7 05:54:30.355: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-3778-webhook\" deleted\n" Nov 7 05:54:30.355: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-3778-webhook" deleted error:%!s(<nil>) Nov 7 05:54:30.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig --namespace=gmsa-full-test-windows-3778 exec --namespace=gmsa-full-test-windows-3778 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 7 05:54:36.505: INFO: stderr: "" Nov 7 05:54:36.506: INFO: stdout: "namespace \"gmsa-full-test-windows-3778-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-3778-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io 
\"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 7 05:54:36.506: INFO: stdout:namespace "gmsa-full-test-windows-3778-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-3778-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-3778-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 7 05:54:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 56 lines ... 
Nov 7 05:55:12.668: INFO: RC rs: consume custom metric 0 in total Nov 7 05:55:12.668: INFO: RC rs: disabling consumption of custom metric QPS Nov 7 05:55:12.802: INFO: waiting for 3 replicas (current: 5) Nov 7 05:55:22.109: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-hd8wg to /logs/artifacts/clusters/capz-conf-ie6pqe/machines/capz-conf-ie6pqe-md-win-744996d99c-pzgf7/crashdumps.tar Nov 7 05:55:23.729: INFO: Collecting boot logs for AzureMachine capz-conf-ie6pqe-md-win-hd8wg Failed to get logs for machine capz-conf-ie6pqe-md-win-744996d99c-pzgf7, cluster default/capz-conf-ie6pqe: getting a new sftp client connection: ssh: subsystem request failed Nov 7 05:55:25.089: INFO: Collecting logs for Windows node capz-conf-n64xz in cluster capz-conf-ie6pqe in namespace default Nov 7 05:55:32.867: INFO: waiting for 3 replicas (current: 5) Nov 7 05:55:42.765: INFO: RC rs: sending request to consume 325 millicores Nov 7 05:55:42.765: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 7 05:55:52.865: INFO: waiting for 3 replicas (current: 5) ... skipping 12 lines ... 
Nov 7 05:57:43.081: INFO: RC rs: sending request to consume 325 millicores Nov 7 05:57:43.082: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } Nov 7 05:57:52.865: INFO: waiting for 3 replicas (current: 5) Nov 7 05:57:58.500: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-n64xz to /logs/artifacts/clusters/capz-conf-ie6pqe/machines/capz-conf-ie6pqe-md-win-744996d99c-vmmzz/crashdumps.tar Nov 7 05:58:00.182: INFO: Collecting boot logs for AzureMachine capz-conf-ie6pqe-md-win-n64xz Failed to get logs for machine capz-conf-ie6pqe-md-win-744996d99c-vmmzz, cluster default/capz-conf-ie6pqe: getting a new sftp client connection: ssh: subsystem request failed STEP: Dumping workload cluster default/capz-conf-ie6pqe kube-system pod logs STEP: Creating log watcher for controller kube-system/calico-node-q5xml, container calico-node STEP: Creating log watcher for controller kube-system/calico-node-windows-wf56q, container calico-node-felix STEP: Creating log watcher for controller kube-system/containerd-logger-sfdlq, container containerd-logger STEP: Collecting events for Pod kube-system/coredns-fdff55fb9-89q7m STEP: failed to find events of Pod "coredns-fdff55fb9-89q7m" STEP: Creating log watcher for controller kube-system/calico-kube-controllers-56c5ff4bf8-sq7xp, container calico-kube-controllers STEP: Creating log watcher for controller kube-system/coredns-fdff55fb9-89q7m, container coredns STEP: Creating log watcher for controller kube-system/containerd-logger-59g6l, container containerd-logger STEP: Collecting events for Pod kube-system/calico-node-q5xml STEP: failed to find events of Pod "calico-node-q5xml" STEP: Creating log watcher for controller kube-system/calico-node-windows-cx4hp, 
container calico-node-startup STEP: Collecting events for Pod kube-system/calico-kube-controllers-56c5ff4bf8-sq7xp STEP: failed to find events of Pod "calico-kube-controllers-56c5ff4bf8-sq7xp" STEP: Collecting events for Pod kube-system/containerd-logger-59g6l STEP: failed to find events of Pod "containerd-logger-59g6l" STEP: Collecting events for Pod kube-system/calico-node-windows-wf56q STEP: failed to find events of Pod "calico-node-windows-wf56q" STEP: Collecting events for Pod kube-system/calico-node-windows-cx4hp STEP: Creating log watcher for controller kube-system/calico-node-windows-cx4hp, container calico-node-felix STEP: failed to find events of Pod "calico-node-windows-cx4hp" STEP: Creating log watcher for controller kube-system/coredns-fdff55fb9-h77rw, container coredns STEP: Collecting events for Pod kube-system/containerd-logger-sfdlq STEP: failed to find events of Pod "containerd-logger-sfdlq" STEP: Creating log watcher for controller kube-system/calico-node-windows-wf56q, container calico-node-startup STEP: Collecting events for Pod kube-system/coredns-fdff55fb9-h77rw STEP: failed to find events of Pod "coredns-fdff55fb9-h77rw" STEP: Creating log watcher for controller kube-system/csi-proxy-lb85p, container csi-proxy STEP: Fetching kube-system pod logs took 958.798354ms STEP: Dumping workload cluster default/capz-conf-ie6pqe Azure activity log STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-ie6pqe-control-plane-8mddd, container kube-apiserver STEP: Collecting events for Pod kube-system/kube-proxy-windows-9dxnq STEP: failed to find events of Pod "kube-proxy-windows-9dxnq" STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-ie6pqe-control-plane-8mddd STEP: Collecting events for Pod kube-system/kube-proxy-zn9xq STEP: Collecting events for Pod 
kube-system/etcd-capz-conf-ie6pqe-control-plane-8mddd STEP: Creating log watcher for controller kube-system/metrics-server-954b56d74-7gv5b, container metrics-server STEP: Collecting events for Pod kube-system/metrics-server-954b56d74-7gv5b STEP: failed to find events of Pod "kube-proxy-zn9xq" STEP: Creating log watcher for controller kube-system/kube-proxy-windows-bcdlz, container kube-proxy STEP: failed to find events of Pod "etcd-capz-conf-ie6pqe-control-plane-8mddd" STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-ie6pqe-control-plane-8mddd, container kube-controller-manager STEP: failed to find events of Pod "metrics-server-954b56d74-7gv5b" STEP: Collecting events for Pod kube-system/kube-proxy-windows-bcdlz STEP: failed to find events of Pod "kube-proxy-windows-bcdlz" STEP: Collecting events for Pod kube-system/csi-proxy-lb85p STEP: Creating log watcher for controller kube-system/csi-proxy-ldbvg, container csi-proxy STEP: failed to find events of Pod "csi-proxy-lb85p" STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-ie6pqe-control-plane-8mddd STEP: failed to find events of Pod "kube-controller-manager-capz-conf-ie6pqe-control-plane-8mddd" STEP: Creating log watcher for controller kube-system/etcd-capz-conf-ie6pqe-control-plane-8mddd, container etcd STEP: Collecting events for Pod kube-system/csi-proxy-ldbvg STEP: failed to find events of Pod "csi-proxy-ldbvg" STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9dxnq, container kube-proxy STEP: failed to find events of Pod "kube-scheduler-capz-conf-ie6pqe-control-plane-8mddd" STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-ie6pqe-control-plane-8mddd, container kube-scheduler STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-ie6pqe-control-plane-8mddd 
STEP: failed to find events of Pod "kube-apiserver-capz-conf-ie6pqe-control-plane-8mddd" STEP: Creating log watcher for controller kube-system/kube-proxy-zn9xq, container kube-proxy STEP: Fetching activity logs took 2.350914685s ++ popd /home/prow/go/src/k8s.io/windows-testing ++ /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/log/redact.sh ================ REDACTING LOGS ================ ... skipping 95 lines ... I1107 05:58:45.212817 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 2 items received Nov 7 05:58:45.212: INFO: ConsumeCPU failure: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100": unexpected EOF I1107 05:58:45.212844 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received Nov 7 05:59:00.214: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } I1107 05:59:16.215374 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 05:59:16.215444 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 05:59:22.805: INFO: Unexpected error: <*url.Error | 0xc003f02d80>: { Op: "Get", URL: 
"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs", Err: <*net.OpError | 0xc004212b90>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 7 05:59:22.806: FAIL: Get "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs": dial tcp 20.150.157.23:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000) test/e2e/framework/autoscaling/autoscaling_utils.go:442 +0x535 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() test/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a ... skipping 15 lines ... test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:279 +0x21e k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.2() test/e2e/autoscaling/horizontal_pod_autoscaling.go:74 +0x88 E1107 05:59:22.806861 15004 runtime.go:79] Observed a panic: framework.FailurePanic{Message:"Nov 7 05:59:22.806: Get \"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs\": dial tcp 20.150.157.23:6443: i/o timeout", Filename:"test/e2e/framework/autoscaling/autoscaling_utils.go", Line:442, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:442 +0x535\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1()\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26dd811, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7e81748?, 0xc00012e000?}, 0x3?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7e81748, 0xc00012e000}, 0xc003f53158, 0x2f69aaa?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7e81748, 0xc00012e000}, 0xb0?, 0x2f68645?, 0x10?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7e81748, 0xc00012e000}, 0xc002c01ecc?, 0xc00225fc00?, 0x25c5967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x74748d6?, 0x2?, 
0xc002c01ec0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc0032ce000, 0x3, 0x3?)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:478 +0x7f\nk8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000669e68, {0x74748d6?, 0xc0036b8b40?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000de4d20)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8\nk8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:279 +0x21e\nk8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.2()\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:74 +0x88"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ... skipping 2 lines ... 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6fb3a60?, 0xc004a9e6c0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004a9e6c0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6fb3a60, 0xc004a9e6c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail.func1() test/e2e/framework/log.go:106 +0x7d panic({0x6fb5ba0, 0xc000024bd0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00087a9c0, 0xcd}, {0xc0006696e8?, 0xc0006696f8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.fail({0xc00087a9c0, 0xcd}, {0xc0006697c8?, 0x74744ba?, 0xc0006697e8?}) test/e2e/framework/log.go:110 +0x1b4 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0008ca000, 0xb8}, {0xc000669860?, 0xc0008ca000?, 0xc000669888?}) test/e2e/framework/log.go:62 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4fba0, 0xc003f02d80}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000) ... skipping 41 lines ... 
I1107 06:00:49.219888 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 4 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 06:01:00.215: INFO: ConsumeCPU failure: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100": dial tcp 20.150.157.23:6443: i/o timeout Nov 7 06:01:00.215: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } I1107 06:01:20.220790 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 06:01:20.220953 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 06:01:30.219: INFO: ConsumeCPU failure: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100": dial tcp 20.150.157.23:6443: i/o timeout Nov 7 06:01:30.219: INFO: Unexpected error: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } 
Nov 7 06:01:30.219: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).sendConsumeCPURequest(0xc0032ce000, 0x145) test/e2e/framework/autoscaling/autoscaling_utils.go:368 +0x107 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).makeConsumeCPURequests(0xc0032ce000) test/e2e/framework/autoscaling/autoscaling_utils.go:282 +0x1f7 created by k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer test/e2e/framework/autoscaling/autoscaling_utils.go:238 +0xa3d STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-1103, will wait for the garbage collector to delete the pods 11/07/22 06:01:40.22 I1107 06:01:51.224229 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true I1107 06:01:51.224333 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true Nov 7 06:02:10.221: INFO: Unexpected error: <*url.Error | 0xc002cc4000>: { Op: "Get", URL: "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs", Err: <*net.OpError | 0xc0025b20a0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... 
Zone: "", }, Err: {}, }, } Nov 7 06:02:10.222: FAIL: Get "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs": dial tcp 20.150.157.23:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).CleanUp(0xc0032ce000) test/e2e/framework/autoscaling/autoscaling_utils.go:546 +0x2a5 panic({0x6fb3a60, 0xc004a9e6c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc004a9e6c0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x6fb3a60, 0xc004a9e6c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail.func1() test/e2e/framework/log.go:106 +0x7d panic({0x6fb5ba0, 0xc000024bd0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.fail({0xc00087a9c0, 0xcd}, {0xc0006697c8?, 0x74744ba?, 0xc0006697e8?}) test/e2e/framework/log.go:110 +0x1b4 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0008ca000, 0xb8}, {0xc000669860?, 0xc0008ca000?, 0xc000669888?}) test/e2e/framework/log.go:62 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7e4fba0, 0xc003f02d80}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000) ... skipping 30 lines ... [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/07/22 06:02:40.228 STEP: Collecting events from namespace "horizontal-pod-autoscaling-1103". 
11/07/22 06:02:40.228 I1107 06:02:53.225844 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 06:02:53.226205 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 06:03:10.231: INFO: Unexpected error: failed to list events in namespace "horizontal-pod-autoscaling-1103": <*url.Error | 0xc0051ccfc0>: { Op: "Get", URL: "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/events", Err: <*net.OpError | 0xc004c545a0>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 7 06:03:10.232: FAIL: failed to list events in namespace "horizontal-pod-autoscaling-1103": Get "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/events": dial tcp 20.150.157.23:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00111c5c0, {0xc003d7bdc0, 0x1f}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x7ebd148, 0xc0040c21a0}, {0xc003d7bdc0, 0x1f}) test/e2e/framework/debug/dump.go:62 +0x8d ... skipping 9 lines ... /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-1103" for this suite. 
11/07/22 06:03:10.232 I1107 06:03:24.228809 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 06:03:24.228849 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 06:03:40.237: FAIL: Couldn't delete ns: "horizontal-pod-autoscaling-1103": Delete "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103": dial tcp 20.150.157.23:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103", Err:(*net.OpError)(0xc004184ff0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000de4d20) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x650bd80?, 0xc003ddd5a0?, 0xc00311d8c0?}, {0x7476102, 0x4}, {0xac7bef8, 0x0, 0xc00311d910?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x650bd80?, 0xc003ddd5a0?, 0x28deb25?}, {0xac7bef8?, 0xc002f25f80?, 0x774c000?}) /usr/local/go/src/reflect/value.go:368 +0xbc ------------------------------ • [FAILED] [543.586 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 [It] Should scale from 5 pods to 3 pods and then from 3 pods to 
1 pod test/e2e/autoscaling/horizontal_pod_autoscaling.go:73 ... skipping 78 lines ... I1107 05:58:45.212817 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 2 items received Nov 7 05:58:45.212: INFO: ConsumeCPU failure: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=325&requestSizeMillicores=100": unexpected EOF I1107 05:58:45.212844 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received Nov 7 05:59:00.214: INFO: ConsumeCPU URL: {https capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-1103/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=325&requestSizeMillicores=100 } I1107 05:59:16.215374 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 05:59:16.215444 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 05:59:22.805: INFO: Unexpected error: <*url.Error | 0xc003f02d80>: { Op: "Get", URL: "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs", Err: <*net.OpError | 0xc004212b90>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... 
Zone: "", }, Err: {}, }, } Nov 7 05:59:22.806: FAIL: Get "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs": dial tcp 20.150.157.23:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000) test/e2e/framework/autoscaling/autoscaling_utils.go:442 +0x535 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() test/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a ... skipping 15 lines ... test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:279 +0x21e k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.2() test/e2e/autoscaling/horizontal_pod_autoscaling.go:74 +0x88 E1107 05:59:22.806861 15004 runtime.go:79] Observed a panic: framework.FailurePanic{Message:"Nov 7 05:59:22.806: Get \"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/apis/apps/v1/namespaces/horizontal-pod-autoscaling-1103/replicasets/rs\": dial tcp 20.150.157.23:6443: i/o timeout", Filename:"test/e2e/framework/autoscaling/autoscaling_utils.go", Line:442, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc0032ce000)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:442 +0x535\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1()\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26dd811, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7e81748?, 
0xc00012e000?}, 0x3?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7e81748, 0xc00012e000}, 0xc003f53158, 0x2f69aaa?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7e81748, 0xc00012e000}, 0xb0?, 0x2f68645?, 0x10?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7e81748, 0xc00012e000}, 0xc002c01ecc?, 0xc00225fc00?, 0x25c5967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x74748d6?, 0x2?, 0xc002c01ec0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc0032ce000, 0x3, 0x3?)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:478 +0x7f\nk8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000669e68, {0x74748d6?, 0xc0036b8b40?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, 0xc000de4d20)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8\nk8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:279 +0x21e\nk8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.2()\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:74 +0x88"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ... skipping 2 lines ... 
11/07/22 06:03:10.232 I1107 06:03:24.228809 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true I1107 06:03:24.228849 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true Nov 7 06:03:40.237: FAIL: Couldn't delete ns: "horizontal-pod-autoscaling-1103": Delete "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103": dial tcp 20.150.157.23:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103", Err:(*net.OpError)(0xc004184ff0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000de4d20) test/e2e/framework/framework.go:383 +0x1ca ... skipping 30 lines ... k8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x74748d6?, 0x61a2e85?}, {{0x74765e2, 0x4}, {0x747f766, 0x7}, {0x74886f0, 0xa}}, {0x7475836, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:279 +0x21e k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.2() test/e2e/autoscaling/horizontal_pod_autoscaling.go:74 +0x88 There were additional failures detected after the initial failure: [FAILED] Nov 7 06:03:10.232: failed to list events in namespace "horizontal-pod-autoscaling-1103": Get "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103/events": dial tcp 20.150.157.23:6443: i/o timeout In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00111c5c0, {0xc003d7bdc0, 0x1f}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x7ebd148, 0xc0040c21a0}, {0xc003d7bdc0, 0x1f}) ... skipping 6 lines ... test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x650bd80?, 0xc003ddd620?, 0x26de545?}, {0x7476102, 0x4}, {0xac7bef8, 0x0, 0xc00482a130?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x650bd80?, 0xc003ddd620?, 0x0?}, {0xac7bef8?, 0xc004a0af68?, 0x2626699?}) /usr/local/go/src/reflect/value.go:368 +0xbc ---------- [FAILED] Nov 7 06:03:40.237: Couldn't delete ns: "horizontal-pod-autoscaling-1103": Delete "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103": dial tcp 20.150.157.23:6443: i/o timeout (&url.Error{Op:"Delete", URL:"https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-1103", Err:(*net.OpError)(0xc004184ff0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000de4d20) ... skipping 13 lines ... 
STEP: Creating a kubernetes client 11/07/22 06:03:40.238 Nov 7 06:03:40.238: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-ie6pqe.kubeconfig I1107 06:03:40.239857 15004 discovery.go:214] Invalidating discovery information STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/07/22 06:03:40.239 I1107 06:03:55.229721 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=8m28s&timeoutSeconds=508&watch=true I1107 06:03:55.230097 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 10 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=8m5s&timeoutSeconds=485&watch=true Nov 7 06:04:10.244: INFO: Unexpected error while creating namespace: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.150.157.23:6443: i/o timeout I1107 06:04:25.230833 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received I1107 06:04:25.230883 15004 reflector.go:559] test/e2e/node/taints.go:147: Watch close - *v1.Pod total 0 items received Nov 7 06:04:42.246: INFO: Unexpected error while creating namespace: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.150.157.23:6443: i/o timeout I1107 06:04:56.232154 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=5m8s&timeoutSeconds=308&watch=true 
I1107 06:04:56.232206 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=5m33s&timeoutSeconds=333&watch=true Nov 7 06:05:12.248: INFO: Unexpected error while creating namespace: Post "https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp 20.150.157.23:6443: i/o timeout Nov 7 06:05:12.248: INFO: Unexpected error: <*errors.errorString | 0xc000205c90>: { s: "timed out waiting for the condition", } Nov 7 06:05:12.248: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000de4f00) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 7 06:05:12.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1107 06:05:27.233783 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-5857/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=39423&timeout=5m33s&timeoutSeconds=333&watch=true I1107 06:05:27.233814 15004 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://capz-conf-ie6pqe-15539a2b.westus3.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-6429/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=39381&timeout=5m8s&timeoutSeconds=308&watch=true {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace 
period","severity":"error","time":"2022-11-07T06:05:38Z"} {"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-11-07T06:05:38Z"}