Recent runs
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 5h15m
Revision | main
... skipping 59 lines ... Thu, 10 Nov 2022 00:52:44 +0000: running gmsa setup Thu, 10 Nov 2022 00:52:44 +0000: setting up domain vm in gmsa-dc-9678 with keyvault capz-ci-gmsa make: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' GOBIN=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin ./scripts/go_install.sh github.com/drone/envsubst/v2/cmd/envsubst envsubst v2.0.0-20210730161058-179042472c46 go: downloading github.com/drone/envsubst/v2 v2.0.0-20210730161058-179042472c46 make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' WARNING: Failed to query a3dadaa5-8e1b-459e-abb2-f4b9241bf73a by invoking Graph API. If you don't have permission to query Graph API, please specify --assignee-object-id and --assignee-principal-type. WARNING: Assuming a3dadaa5-8e1b-459e-abb2-f4b9241bf73a as an object ID. Pre-reqs are met for creating Domain vm { "id": "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/gmsa-dc-9678", "location": "northeurope", "managedBy": null, ... skipping 3 lines ... }, "tags": { "creationTimestamp": "2022-11-10T00:52:56Z" }, "type": "Microsoft.Resources/resourceGroups" } ERROR: (ResourceNotFound) The Resource 'Microsoft.Compute/virtualMachines/dc-9678' under resource group 'gmsa-dc-9678' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix Code: ResourceNotFound Message: The Resource 'Microsoft.Compute/virtualMachines/dc-9678' under resource group 'gmsa-dc-9678' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix Creating Domain vm WARNING: It is recommended to use parameter "--public-ip-sku Standard" to create new VM with Standard public IP. Please note that the default public IP used for VM creation will be changed from Basic to Standard in the future. { "fqdns": "", ... skipping 13 lines ... "privateIpAddress": "172.16.0.4", "publicIpAddress": "", "resourceGroup": "gmsa-dc-9678", "zones": "" } WARNING: Command group 'network bastion' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus ERROR: (ResourceNotFound) The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-9678' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix Code: ResourceNotFound Message: The Resource 'Microsoft.Network/bastionHosts/gmsa-bastion' under resource group 'gmsa-dc-9678' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix Thu, 10 Nov 2022 00:54:51 +0000: starting to create cluster WARNING: The installed extension 'capi' is in preview. Using ./capz/templates/gmsa.yaml WARNING: Command group 'capi' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus ... skipping 5 lines ... 
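The two ResourceNotFound errors above are expected: the GMSA setup probes for an existing domain-controller VM and bastion before creating them, so a not-found response on a fresh resource group is part of the normal flow. A minimal sketch of that probe-then-create pattern, with the resource-group and VM names taken from the log and everything else (image, credentials, public IP SKU) assumed rather than copied from the CI script:

```bash
#!/usr/bin/env bash
# Sketch only -- not the CI script. Resource names come from the log above;
# image, credentials and SKU flags are placeholders.
RG="gmsa-dc-9678"
VM="dc-9678"

# Probe: on a fresh resource group this prints the ResourceNotFound message
# seen above and returns non-zero, which triggers the create branch.
if ! az vm show --resource-group "$RG" --name "$VM" --output none; then
  echo "Creating Domain vm"
  az vm create \
    --resource-group "$RG" \
    --name "$VM" \
    --image Win2019Datacenter \
    --admin-username "azureuser" \
    --admin-password "$DC_ADMIN_PASSWORD" \
    --public-ip-sku Standard   # avoids the Basic-IP deprecation warning seen in the log
fi
```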
WARNING: Merged "capi-manager" as current context in /root/.kube/config WARNING: ✓ Obtained AKS credentials WARNING: ✓ Created Cluster Identity Secret WARNING: ✓ Initialized management cluster WARNING: ✓ Generated workload cluster configuration at "capz-conf-jylo7u.yaml" WARNING: ✓ Created workload cluster "capz-conf-jylo7u" Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: 
"capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found Error: "capz-conf-jylo7u-kubeconfig" not found in namespace "default": secrets "capz-conf-jylo7u-kubeconfig" not found WARNING: ✓ Workload cluster is accessible WARNING: ✓ Workload access configuration written to "capz-conf-jylo7u.kubeconfig" WARNING: ✓ Deployed CNI to workload cluster WARNING: ✓ Deployed Windows Calico support to workload cluster WARNING: ✓ Deployed Windows kube-proxy support to workload cluster WARNING: ✓ Workload cluster is ready ... skipping 1406 lines ... STEP: Destroying namespace "daemonsets-1272" for this suite. 11/10/22 01:20:28.282 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 01:20:28.393 Nov 10 01:20:28.393: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig I1110 01:20:28.394583 14266 discovery.go:214] Invalidating discovery information ... skipping 8 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/10/22 01:20:28.912 I1110 01:20:28.912575 14266 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:20:28.912585 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:20:29.113705 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Nov 10 01:20:29.229: INFO: Waiting up to 2m0s for pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" in namespace "var-expansion-203" to be "container 0 failed with reason CreateContainerConfigError" Nov 10 01:20:29.330: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. Elapsed: 101.294911ms Nov 10 01:20:31.434: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205291147s Nov 10 01:20:33.451: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.222464332s Nov 10 01:20:33.452: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 10 01:20:33.452: INFO: Deleting pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" in namespace "var-expansion-203" Nov 10 01:20:33.562: INFO: Wait up to 5m0s for pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 10 01:20:37.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 4 lines ... tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-203" for this suite. 11/10/22 01:20:37.902 ------------------------------ • [SLOW TEST] [9.654 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 01:20:28.393 ... skipping 10 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/10/22 01:20:28.912 I1110 01:20:28.912575 14266 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:20:28.912585 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:20:29.113705 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] test/e2e/common/node/expansion.go:186 Nov 10 01:20:29.229: INFO: Waiting up to 2m0s for pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" in namespace "var-expansion-203" to be "container 0 failed with reason CreateContainerConfigError" Nov 10 01:20:29.330: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. Elapsed: 101.294911ms Nov 10 01:20:31.434: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205291147s Nov 10 01:20:33.451: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222464332s Nov 10 01:20:33.452: INFO: Pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 10 01:20:33.452: INFO: Deleting pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" in namespace "var-expansion-203" Nov 10 01:20:33.562: INFO: Wait up to 5m0s for pod "var-expansion-06381d74-d3bd-4e87-84fe-0feeb09b9c40" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 10 01:20:37.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 559 lines ... 
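The long run of kubeconfig errors near the top of this log is a retry loop rather than a failure: the workload cluster's kubeconfig only appears as the Secret capz-conf-jylo7u-kubeconfig once CAPZ has provisioned the control plane, and the later "✓ Workload cluster is accessible" line shows it eventually did. A sketch of the equivalent manual check against the management cluster, with the cluster name taken from the log and the commands illustrative rather than the CI tooling's exact calls:

```bash
# Manual equivalent of the retry loop near the top of this log.
# Cluster name taken from the log; runs against the management cluster.
CLUSTER="capz-conf-jylo7u"

# The secret only exists once the workload control plane is up:
kubectl -n default get secret "${CLUSTER}-kubeconfig"

# Once present, clusterctl can extract it and the workload cluster becomes reachable:
clusterctl get kubeconfig "${CLUSTER}" -n default > "${CLUSTER}.kubeconfig"
kubectl --kubeconfig "${CLUSTER}.kubeconfig" get nodes -o wide
```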
Nov 10 01:40:10.087: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:24.083: INFO: RC test-deployment: sending request to consume 250 MB Nov 10 01:40:24.083: INFO: ConsumeMem URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7863/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 10 01:40:30.086: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.085: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.187: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.187: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 01:40:50.187: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000a4de68, {0x751dca3?, 0xc0047e87e0?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f7fdb, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 10 01:41:08.055: INFO: Latency metrics for node capz-conf-v5lj5 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-7863" for this suite. 11/10/22 01:41:08.055 ------------------------------ • [FAILED] [945.029 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:154 ... skipping 146 lines ... Nov 10 01:40:10.087: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:24.083: INFO: RC test-deployment: sending request to consume 250 MB Nov 10 01:40:24.083: INFO: ConsumeMem URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7863/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 10 01:40:30.086: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.085: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.187: INFO: waiting for 3 replicas (current: 2) Nov 10 01:40:50.187: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 01:40:50.187: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000a4de68, {0x751dca3?, 0xc0047e87e0?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f7fdb, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 1170 lines ... 
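This is the first of several HPA scale-up timeouts in this run, all with the same signature: the consumer workload sits at 2 replicas until the 15m limit expires. A diagnostic sketch, using the namespace from the log and assuming the HPA object shares the test-deployment name:

```bash
# Diagnostic sketch only. Namespace and workload name are taken from the log;
# the HPA object name is an assumption.
NS="horizontal-pod-autoscaling-7863"

kubectl -n "$NS" describe hpa test-deployment          # current vs. target metrics and scaling events
kubectl -n "$NS" get deploy,rs,pods -o wide            # where the existing replicas landed
kubectl -n "$NS" get events --sort-by=.lastTimestamp | tail -n 20
kubectl top pods -n "$NS"                              # requires the metrics API to be serving
```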
test/e2e/apimachinery/garbage_collector.go:1040 Nov 10 01:55:25.077: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig Nov 10 01:55:27.745: INFO: created owner resource "ownerkjg4q" Nov 10 01:55:27.868: INFO: created dependent resource "dependentxwklt" STEP: wait for the owner to be deleted 11/10/22 01:55:27.976 STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/10/22 01:55:48.085 I1110 01:56:18.495113 14266 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 10 01:56:18.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 32 lines ... test/e2e/apimachinery/garbage_collector.go:1040 Nov 10 01:55:25.077: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig Nov 10 01:55:27.745: INFO: created owner resource "ownerkjg4q" Nov 10 01:55:27.868: INFO: created dependent resource "dependentxwklt" STEP: wait for the owner to be deleted 11/10/22 01:55:27.976 STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 11/10/22 01:55:48.085 I1110 01:56:18.495113 14266 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 10 01:56:18.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 140 lines ... STEP: Destroying namespace "sched-preemption-8977" for this suite. 11/10/22 01:57:39.33 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 01:57:39.439 Nov 10 01:57:39.439: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig I1110 01:57:39.441538 14266 discovery.go:214] Invalidating discovery information ... skipping 10 lines ... 
I1110 01:57:39.990283 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:57:40.193590 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 STEP: Creating a simple DaemonSet "daemon-set" 11/10/22 01:57:40.682 STEP: Check that daemon pods launch on every node of the cluster. 11/10/22 01:57:40.788 Nov 10 01:57:40.976: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:41.092: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 10 01:57:41.093: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 ... skipping 21 lines ... Nov 10 01:57:49.203: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:49.307: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:49.307: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:57:50.203: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:50.306: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 10 01:57:50.306: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/10/22 01:57:50.408 Nov 10 01:57:50.781: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:50.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:50.984: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:57:52.094: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:52.223: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:52.223: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 ... skipping 18 lines ... 
Nov 10 01:57:59.095: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:59.235: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:59.235: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:58:00.095: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:58:00.227: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 10 01:58:00.227: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 11/10/22 01:58:00.227 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 STEP: Deleting DaemonSet "daemon-set" 11/10/22 01:58:00.483 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4824, will wait for the garbage collector to delete the pods 11/10/22 01:58:00.483 I1110 01:58:00.585364 14266 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57 I1110 01:58:00.585403 14266 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57 ... skipping 19 lines ... tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-4824" for this suite. 11/10/22 01:58:06.544 ------------------------------ • [SLOW TEST] [27.229 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 01:57:39.439 ... skipping 12 lines ... I1110 01:57:39.990283 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 01:57:40.193590 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:146 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:294 STEP: Creating a simple DaemonSet "daemon-set" 11/10/22 01:57:40.682 STEP: Check that daemon pods launch on every node of the cluster. 11/10/22 01:57:40.788 Nov 10 01:57:40.976: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:41.092: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Nov 10 01:57:41.093: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 ... skipping 21 lines ... 
Nov 10 01:57:49.203: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:49.307: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:49.307: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:57:50.203: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:50.306: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 10 01:57:50.306: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 11/10/22 01:57:50.408 Nov 10 01:57:50.781: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:50.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:50.984: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:57:52.094: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:52.223: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:52.223: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 ... skipping 18 lines ... Nov 10 01:57:59.095: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:57:59.235: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Nov 10 01:57:59.235: INFO: Node capz-conf-8xqtq is running 0 daemon pod, expected 1 Nov 10 01:58:00.095: INFO: DaemonSet pods can't tolerate node capz-conf-jylo7u-control-plane-h282z with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Nov 10 01:58:00.227: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Nov 10 01:58:00.227: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 11/10/22 01:58:00.227 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:111 STEP: Deleting DaemonSet "daemon-set" 11/10/22 01:58:00.483 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4824, will wait for the garbage collector to delete the pods 11/10/22 01:58:00.483 I1110 01:58:00.585364 14266 reflector.go:221] Starting reflector *v1.Pod (0s) from test/utils/pod_store.go:57 I1110 01:58:00.585403 14266 reflector.go:257] Listing and watching *v1.Pod from test/utils/pod_store.go:57 ... skipping 47 lines ... 
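The DaemonSet test above does its bookkeeping by listing pods per node and skipping the control-plane node that carries the NoSchedule taint. A sketch of the same check done by hand, with the namespace and DaemonSet name taken from the log and the label selector assumed:

```bash
# Sketch of the per-node bookkeeping the test performs. Namespace and DaemonSet
# name come from the log; the pod label selector is an assumption.
kubectl -n daemonsets-4824 get daemonset daemon-set \
  -o jsonpath='{.status.desiredNumberScheduled} desired, {.status.numberReady} ready{"\n"}'

# Per-node placement of the daemon pods:
kubectl -n daemonsets-4824 get pods -l daemonset-name=daemon-set -o wide
```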
[It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Nov 10 01:58:07.426: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig Nov 10 01:58:10.140: INFO: created owner resource "ownerqkvfb" Nov 10 01:58:10.269: INFO: created dependent resource "dependentbkkwg" Nov 10 01:58:10.529: INFO: created canary resource "canary77v9j" I1110 01:58:21.273251 14266 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 10 01:58:21.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 31 lines ... [It] should support cascading deletion of custom resources test/e2e/apimachinery/garbage_collector.go:905 Nov 10 01:58:07.426: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig Nov 10 01:58:10.140: INFO: created owner resource "ownerqkvfb" Nov 10 01:58:10.269: INFO: created dependent resource "dependentbkkwg" Nov 10 01:58:10.529: INFO: created canary resource "canary77v9j" I1110 01:58:21.273251 14266 request.go:1353] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/node/init/init.go:32 Nov 10 01:58:21.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Garbage collector test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Garbage collector ... skipping 1619 lines ... 
Nov 10 02:34:44.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 10 02:34:45.108: INFO: stderr: "" Nov 10 02:34:45.108: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 10 02:34:45.108: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 10 02:34:45.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-2278-webhook' Nov 10 02:34:45.707: INFO: stderr: "" Nov 10 02:34:45.707: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-2278-webhook\" deleted\n" Nov 10 02:34:45.707: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-2278-webhook" deleted error:%!s(<nil>) Nov 10 02:34:45.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig --namespace=gmsa-full-test-windows-2278 exec --namespace=gmsa-full-test-windows-2278 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 10 02:34:52.262: INFO: stderr: "" Nov 10 02:34:52.262: INFO: stdout: "namespace \"gmsa-full-test-windows-2278-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-2278-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 10 02:34:52.262: INFO: stdout:namespace "gmsa-full-test-windows-2278-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-2278-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 02:34:52.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 157 lines ... 
Nov 10 02:34:44.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 10 02:34:45.108: INFO: stderr: "" Nov 10 02:34:45.108: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 10 02:34:45.108: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 10 02:34:45.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-2278-webhook' Nov 10 02:34:45.707: INFO: stderr: "" Nov 10 02:34:45.707: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-2278-webhook\" deleted\n" Nov 10 02:34:45.707: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-2278-webhook" deleted error:%!s(<nil>) Nov 10 02:34:45.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig --namespace=gmsa-full-test-windows-2278 exec --namespace=gmsa-full-test-windows-2278 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 10 02:34:52.262: INFO: stderr: "" Nov 10 02:34:52.262: INFO: stdout: "namespace \"gmsa-full-test-windows-2278-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-2278-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 10 02:34:52.262: INFO: stdout:namespace "gmsa-full-test-windows-2278-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-2278-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-2278-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 02:34:52.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 368 lines ... 
Nov 10 02:52:33.040: INFO: waiting for 3 replicas (current: 2) Nov 10 02:52:47.034: INFO: RC test-deployment: sending request to consume 250 millicores Nov 10 02:52:47.034: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4577/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 02:52:53.040: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.037: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.139: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.139: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 02:53:13.139: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002791e68, {0x751dca3?, 0xc00398a120?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 10 02:53:32.460: INFO: Latency metrics for node capz-conf-v5lj5 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-4577" for this suite. 11/10/22 02:53:32.461 ------------------------------ • [FAILED] [946.455 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:55 ... skipping 146 lines ... Nov 10 02:52:33.040: INFO: waiting for 3 replicas (current: 2) Nov 10 02:52:47.034: INFO: RC test-deployment: sending request to consume 250 millicores Nov 10 02:52:47.034: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4577/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 02:52:53.040: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.037: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.139: INFO: waiting for 3 replicas (current: 2) Nov 10 02:53:13.139: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 02:53:13.139: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002791e68, {0x751dca3?, 0xc00398a120?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 826 lines ... 
Nov 10 03:14:55.330: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:09.365: INFO: RC rs: sending request to consume 250 millicores Nov 10 03:15:09.365: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6469/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 03:15:15.331: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.330: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.431: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.431: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 03:15:35.431: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002797e68, {0x74f15b8?, 0xc00449de60?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7505469, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74f15b8?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7505469, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 119 lines ... Nov 10 03:15:52.957: INFO: Latency metrics for node capz-conf-v5lj5 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-6469" for this suite. 11/10/22 03:15:52.957 ------------------------------ • [FAILED] [944.604 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] ReplicaSet test/e2e/autoscaling/horizontal_pod_autoscaling.go:69 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70 ... skipping 147 lines ... Nov 10 03:14:55.330: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:09.365: INFO: RC rs: sending request to consume 250 millicores Nov 10 03:15:09.365: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-6469/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 03:15:15.331: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.330: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.431: INFO: waiting for 3 replicas (current: 2) Nov 10 03:15:35.431: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 03:15:35.431: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002797e68, {0x74f15b8?, 0xc00449de60?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7505469, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74f15b8?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7505469, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 2151 lines ... 
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6171321a-5785-4a70-b965-a9a3c7360381 11/10/22 03:50:57.929 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 03:50:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 03:50:58.139829 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 227 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-6171321a-5785-4a70-b965-a9a3c7360381 11/10/22 03:50:57.929 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 03:50:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 03:50:58.139829 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 23 lines ... I1110 03:50:58.780472 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 03:50:58.985331 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/10/22 03:50:58.985 Nov 10 03:50:59.111: INFO: Waiting up to 2m0s for pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501" in namespace "var-expansion-8223" to be "running" Nov 10 03:50:59.215: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 103.427954ms Nov 10 03:51:01.319: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208070571s Nov 10 03:51:03.320: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208642477s Nov 10 03:51:05.318: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20710323s Nov 10 03:51:07.318: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207312816s ... skipping 106 lines ... 
I1110 03:50:58.780472 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 03:50:58.985331 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/10/22 03:50:58.985 Nov 10 03:50:59.111: INFO: Waiting up to 2m0s for pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501" in namespace "var-expansion-8223" to be "running" Nov 10 03:50:59.215: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 103.427954ms Nov 10 03:51:01.319: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208070571s Nov 10 03:51:03.320: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208642477s Nov 10 03:51:05.318: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20710323s Nov 10 03:51:07.318: INFO: Pod "var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207312816s ... skipping 2255 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-5737fc37-c153-4079-8e9d-3c67b7579e6d 11/10/22 04:37:11.881 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 04:37:11.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 04:37:12.137719 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 75 lines ... STEP: verifying the node doesn't have the label kubernetes.io/e2e-5737fc37-c153-4079-8e9d-3c67b7579e6d 11/10/22 04:37:11.881 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 04:37:11.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 04:37:12.137719 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 1041 lines ... 
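For pods like the one above that never leave Pending because a volume subpath expansion fails, the container-level waiting reason and the kubelet's events carry the underlying error. A sketch using the namespace and pod name from the log:

```bash
# Inspection sketch: surface the container waiting reason (typically
# CreateContainerConfigError for a failing subpath expansion) and the kubelet
# events for the stuck pod. Namespace and pod name are taken from the log.
NS="var-expansion-8223"
POD="var-expansion-283bc6d6-776e-4372-a2c7-e4b2599aa501"

kubectl -n "$NS" get pod "$POD" \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state.waiting.reason}{" - "}{.state.waiting.message}{"\n"}{end}'
kubectl -n "$NS" get events --field-selector involvedObject.name="$POD" --sort-by=.lastTimestamp
```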
Nov 10 05:00:49.264: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:02.326: INFO: RC test-deployment: sending request to consume 250 millicores Nov 10 05:01:02.327: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4769/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 05:01:09.265: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.265: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.367: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.367: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 05:01:29.367: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0022abe68, {0x751dca3?, 0xc0056e3560?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 10 05:01:47.118: INFO: Latency metrics for node capz-conf-v5lj5 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-4769" for this suite. 11/10/22 05:01:47.118 ------------------------------ • [FAILED] [944.883 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:48 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 ... skipping 146 lines ... Nov 10 05:00:49.264: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:02.326: INFO: RC test-deployment: sending request to consume 250 millicores Nov 10 05:01:02.327: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-4769/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 05:01:09.265: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.265: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.367: INFO: waiting for 3 replicas (current: 2) Nov 10 05:01:29.367: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 05:01:29.367: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0022abe68, {0x751dca3?, 0xc0056e3560?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 2520 lines ... 
STEP: verifying the node doesn't have the label node 11/10/22 05:14:31.745 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 05:14:31.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 05:14:31.961534 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 133 lines ... STEP: verifying the node doesn't have the label node 11/10/22 05:14:31.745 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 05:14:31.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 05:14:31.961534 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 154 lines ... Nov 10 05:29:18.966: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:31.976: INFO: RC test-deployment: sending request to consume 250 MB Nov 10 05:29:31.977: INFO: ConsumeMem URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3881/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 10 05:29:38.968: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:58.965: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:59.067: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:59.067: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 05:29:59.067: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002791e68, {0x751dca3?, 0xc0023aea80?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f7fdb, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 121 lines ... Nov 10 05:30:16.744: INFO: Latency metrics for node capz-conf-v5lj5 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-3881" for this suite. 
11/10/22 05:30:16.744 ------------------------------ • [FAILED] [944.777 seconds] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/autoscaling/framework.go:23 [Serial] [Slow] Deployment (Pod Resource) test/e2e/autoscaling/horizontal_pod_autoscaling.go:153 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:157 ... skipping 148 lines ... Nov 10 05:29:18.966: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:31.976: INFO: RC test-deployment: sending request to consume 250 MB Nov 10 05:29:31.977: INFO: ConsumeMem URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-3881/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 10 05:29:38.968: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:58.965: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:59.067: INFO: waiting for 3 replicas (current: 2) Nov 10 05:29:59.067: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 05:29:59.067: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc002791e68, {0x751dca3?, 0xc0023aea80?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, 0xc000ab0e10) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x751dca3?, 0x620ff85?}, {{0x74f32c8, 0x4}, {0x74fc46b, 0x7}, {0x7504889, 0xa}}, {0x74f7fdb, 0x6}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 ... skipping 709 lines ... STEP: Destroying namespace "sched-preemption-3682" for this suite. 11/10/22 05:35:25.014 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSS ------------------------------ [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 05:35:25.124 ... skipping 12 lines ... I1110 05:35:25.647328 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:35:25.852369 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 10 05:35:26.398: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. 
preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 05:35:26.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 6 lines ... ------------------------------ • [1.495 seconds] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:27 attempt to deploy past allocatable memory limits test/e2e/windows/memory_limits.go:59 should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 14 lines ... I1110 05:35:25.647328 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:35:25.852369 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/windows/memory_limits.go:48 [It] should fail deployments of pods once there isn't enough memory test/e2e/windows/memory_limits.go:60 Nov 10 05:35:26.398: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 05:35:26.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] ... skipping 329 lines ... I1110 05:37:16.168027 14266 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 05:37:16.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 05:37:16.385937 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 61 lines ... 
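The Windows memory-limits test above passes by finding a FailedScheduling event whose message reports Insufficient memory. A minimal client-go sketch of listing such events for a pod follows; the kubeconfig path, namespace, and pod name are placeholders.

// Sketch: list FailedScheduling events for one pod, similar in spirit to the check
// the memory-limits test performs. All names below are hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A field selector narrows the event list to scheduling failures for one pod.
	events, err := cs.CoreV1().Events("memory-limit-test-example").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=mem-overcommit-pod,reason=FailedScheduling",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Println(e.Reason, e.Message) // e.g. "0/3 nodes are available: ... Insufficient memory."
	}
}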
I1110 05:37:16.168027 14266 reflector.go:227] Stopping reflector *v1.Event (0s) from test/e2e/scheduling/events.go:98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Nov 10 05:37:16.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 I1110 05:37:16.385937 14266 request.go:914] Error in request: resource name may not be empty [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 ... skipping 182 lines ... STEP: Destroying namespace "memory-limit-test-windows-5476" for this suite. 11/10/22 05:42:19.021 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 05:42:19.128 Nov 10 05:42:19.128: INFO: >>> kubeConfig: /home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig I1110 05:42:19.129860 14266 discovery.go:214] Invalidating discovery information ... skipping 8 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/10/22 05:42:19.64 I1110 05:42:19.640341 14266 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:42:19.640389 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:42:19.842036 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Nov 10 05:42:19.948: INFO: Waiting up to 2m0s for pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" in namespace "var-expansion-5965" to be "container 0 failed with reason CreateContainerConfigError" Nov 10 05:42:20.050: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. Elapsed: 101.857107ms Nov 10 05:42:22.154: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206092905s Nov 10 05:42:24.153: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.205388038s Nov 10 05:42:24.154: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 10 05:42:24.154: INFO: Deleting pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" in namespace "var-expansion-5965" Nov 10 05:42:24.264: INFO: Wait up to 5m0s for pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 10 05:42:26.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 4 lines ... tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-5965" for this suite. 11/10/22 05:42:26.582 ------------------------------ • [SLOW TEST] [7.560 seconds] [sig-node] Variable Expansion test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/10/22 05:42:19.128 ... skipping 10 lines ... STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/10/22 05:42:19.64 I1110 05:42:19.640341 14266 reflector.go:221] Starting reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:42:19.640389 14266 reflector.go:257] Listing and watching *v1.ConfigMap from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 I1110 05:42:19.842036 14266 reflector.go:227] Stopping reflector *v1.ConfigMap (0s) from vendor/k8s.io/client-go/tools/watch/informerwatcher.go:146 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] test/e2e/common/node/expansion.go:152 Nov 10 05:42:19.948: INFO: Waiting up to 2m0s for pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" in namespace "var-expansion-5965" to be "container 0 failed with reason CreateContainerConfigError" Nov 10 05:42:20.050: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. Elapsed: 101.857107ms Nov 10 05:42:22.154: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206092905s Nov 10 05:42:24.153: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205388038s Nov 10 05:42:24.154: INFO: Pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" satisfied condition "container 0 failed with reason CreateContainerConfigError" Nov 10 05:42:24.154: INFO: Deleting pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" in namespace "var-expansion-5965" Nov 10 05:42:24.264: INFO: Wait up to 5m0s for pod "var-expansion-6b3fe74c-d417-43bb-a6c7-217b50953823" to be fully deleted [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 10 05:42:26.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion ... skipping 712 lines ... 
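The Variable Expansion test above waits up to 2m0s for the pod's first container to be stuck with reason CreateContainerConfigError before deleting it. A minimal client-go sketch of reading that waiting reason; the kubeconfig path, namespace, and pod name are placeholders.

// Sketch: report the waiting reason of a pod's containers, which is what the
// expansion test checks for. Names are hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("var-expansion-example").
		Get(context.TODO(), "var-expansion-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if st.State.Waiting != nil {
			// Expect "CreateContainerConfigError" for the backtick-subpath case.
			fmt.Println("container", st.Name, "waiting reason:", st.State.Waiting.Reason)
		}
	}
}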
Nov 10 05:51:15.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 10 05:51:15.722: INFO: stderr: "" Nov 10 05:51:15.722: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 10 05:51:15.722: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 10 05:51:15.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-5352-webhook' Nov 10 05:51:16.237: INFO: stderr: "" Nov 10 05:51:16.237: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-5352-webhook\" deleted\n" Nov 10 05:51:16.237: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-5352-webhook" deleted error:%!s(<nil>) Nov 10 05:51:16.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig --namespace=gmsa-full-test-windows-5352 exec --namespace=gmsa-full-test-windows-5352 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 10 05:51:22.617: INFO: stderr: "" Nov 10 05:51:22.617: INFO: stdout: "namespace \"gmsa-full-test-windows-5352-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-5352-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 10 05:51:22.617: INFO: stdout:namespace "gmsa-full-test-windows-5352-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-5352-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 05:51:22.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 141 lines ... 
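The GMSA cleanup above shells out to kubectl to delete the CRD, the webhook CSR, and the webhook manifests. A rough Go equivalent using os/exec is sketched below; the kubeconfig path and the CSR name are hypothetical, and the manifest deletion via exec into the deployer pod is omitted.

// Sketch: run the same kind of kubectl cleanup commands from Go.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	// Prepend a hypothetical kubeconfig flag, then run kubectl and log its output.
	full := append([]string{"--kubeconfig=capz-conf-example.kubeconfig"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s\n", args, out)
	if err != nil {
		fmt.Println("error:", err) // cleanup errors are logged, not treated as fatal
	}
}

func main() {
	kubectl("delete", "CustomResourceDefinition", "gmsacredentialspecs.windows.k8s.io")
	kubectl("delete", "CertificateSigningRequest", "gmsa-webhook.example-webhook") // hypothetical CSR name
}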
Nov 10 05:51:15.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CustomResourceDefinition gmsacredentialspecs.windows.k8s.io' Nov 10 05:51:15.722: INFO: stderr: "" Nov 10 05:51:15.722: INFO: stdout: "customresourcedefinition.apiextensions.k8s.io \"gmsacredentialspecs.windows.k8s.io\" deleted\n" Nov 10 05:51:15.722: INFO: stdout:customresourcedefinition.apiextensions.k8s.io "gmsacredentialspecs.windows.k8s.io" deleted error:%!s(<nil>) Nov 10 05:51:15.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig delete CertificateSigningRequest gmsa-webhook.gmsa-full-test-windows-5352-webhook' Nov 10 05:51:16.237: INFO: stderr: "" Nov 10 05:51:16.237: INFO: stdout: "certificatesigningrequest.certificates.k8s.io \"gmsa-webhook.gmsa-full-test-windows-5352-webhook\" deleted\n" Nov 10 05:51:16.237: INFO: stdout:certificatesigningrequest.certificates.k8s.io "gmsa-webhook.gmsa-full-test-windows-5352-webhook" deleted error:%!s(<nil>) Nov 10 05:51:16.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/k8s.io/windows-testing/capz-conf-jylo7u.kubeconfig --namespace=gmsa-full-test-windows-5352 exec --namespace=gmsa-full-test-windows-5352 webhook-deployer -- kubectl delete -f /manifests.yml' Nov 10 05:51:22.617: INFO: stderr: "" Nov 10 05:51:22.617: INFO: stdout: "namespace \"gmsa-full-test-windows-5352-webhook\" deleted\nsecret \"gmsa-webhook\" deleted\nserviceaccount \"gmsa-webhook\" deleted\nclusterrole.rbac.authorization.k8s.io \"gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role\" deleted\nclusterrolebinding.rbac.authorization.k8s.io \"gmsa-full-test-windows-5352-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role\" deleted\ndeployment.apps \"gmsa-webhook\" deleted\nservice \"gmsa-webhook\" deleted\nvalidatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\nmutatingwebhookconfiguration.admissionregistration.k8s.io \"gmsa-webhook\" deleted\n" Nov 10 05:51:22.617: INFO: stdout:namespace "gmsa-full-test-windows-5352-webhook" deleted secret "gmsa-webhook" deleted serviceaccount "gmsa-webhook" deleted clusterrole.rbac.authorization.k8s.io "gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role" deleted clusterrolebinding.rbac.authorization.k8s.io "gmsa-full-test-windows-5352-webhook-gmsa-webhook-binding-to-gmsa-full-test-windows-5352-webhook-gmsa-webhook-rbac-role" deleted deployment.apps "gmsa-webhook" deleted service "gmsa-webhook" deleted validatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted mutatingwebhookconfiguration.admissionregistration.k8s.io "gmsa-webhook" deleted error:%!s(<nil>) [AfterEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/node/init/init.go:32 Nov 10 05:51:22.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] ... skipping 51 lines ... 
Nov 10 05:51:49.395: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 05:51:49.395: INFO: RC rc: consume 0 MB in total Nov 10 05:51:49.396: INFO: RC rc: disabling mem consumption Nov 10 05:51:49.396: INFO: RC rc: consume custom metric 0 in total Nov 10 05:51:49.396: INFO: RC rc: disabling consumption of custom metric QPS Nov 10 05:51:49.622: INFO: waiting for 3 replicas (current: 1) {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 5h0m0s timeout","severity":"error","time":"2022-11-10T05:52:05Z"} ++ early_exit_handler ++ '[' -n 162 ']' ++ kill -TERM 162 ++ cleanup_dind ++ [[ true == \t\r\u\e ]] ++ echo 'Cleaning up after docker' ... skipping 143 lines ... Nov 10 05:57:20.661: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 05:57:29.725: INFO: waiting for 3 replicas (current: 2) I1110 05:57:43.528890 14266 reflector.go:559] test/e2e/node/taints.go:151: Watch close - *v1.Pod total 7 items received Nov 10 05:57:47.651: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-8xqtq to /logs/artifacts/clusters/capz-conf-jylo7u/machines/capz-conf-jylo7u-md-win-87677d9cb-x9dqb/crashdumps.tar Nov 10 05:57:47.890: INFO: Collecting boot logs for AzureMachine capz-conf-jylo7u-md-win-8xqtq Failed to get logs for machine capz-conf-jylo7u-md-win-87677d9cb-x9dqb, cluster default/capz-conf-jylo7u: dialing public load balancer at capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.108.36:54160->20.82.197.88:22: read: connection reset by peer Nov 10 05:57:49.725: INFO: waiting for 3 replicas (current: 2) Nov 10 05:57:49.973: INFO: Collecting logs for Windows node capz-conf-v5lj5 in cluster capz-conf-jylo7u in namespace default Nov 10 05:57:50.775: INFO: RC rc: sending request to consume 250 millicores Nov 10 05:57:50.775: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 05:58:09.724: INFO: waiting for 3 replicas (current: 2) ... skipping 17 lines ... 
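The repeated "sending request to consume 250 millicores" lines are POSTs to the resource consumer's controller service through the API server's service proxy subresource. A minimal client-go sketch of that request; the namespace and service name are placeholders.

// Sketch: POST .../services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=250&requestSizeMillicores=100
// via the API server's service proxy, as the ConsumeCPU URLs in the log do.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	res := cs.CoreV1().RESTClient().Post().
		Namespace("horizontal-pod-autoscaling-example").
		Resource("services").
		Name("rc-ctrl").
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("durationSec", "30").
		Param("millicores", "250").
		Param("requestSizeMillicores", "100").
		Do(context.TODO())
	if err := res.Error(); err != nil {
		fmt.Println("ConsumeCPU failure:", err) // e.g. an i/o timeout if the API server is unreachable
	} else {
		fmt.Println("ConsumeCPU request accepted")
	}
}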
Nov 10 06:00:49.732: INFO: waiting for 3 replicas (current: 2) Nov 10 06:00:51.477: INFO: RC rc: sending request to consume 250 millicores Nov 10 06:00:51.477: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 10 06:01:01.838: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-v5lj5 to /logs/artifacts/clusters/capz-conf-jylo7u/machines/capz-conf-jylo7u-md-win-87677d9cb-xj8kl/crashdumps.tar Nov 10 06:01:04.238: INFO: Collecting boot logs for AzureMachine capz-conf-jylo7u-md-win-v5lj5 Failed to get logs for machine capz-conf-jylo7u-md-win-87677d9cb-xj8kl, cluster default/capz-conf-jylo7u: getting a new sftp client connection: ssh: subsystem request failed [1mSTEP[0m: Dumping workload cluster default/capz-conf-jylo7u kube-system pod logs [1mSTEP[0m: Collecting events for Pod kube-system/calico-kube-controllers-56c5ff4bf8-sxrk6 [1mSTEP[0m: Fetching kube-system pod logs took 1.61347173s [1mSTEP[0m: Dumping workload cluster default/capz-conf-jylo7u Azure activity log [1mSTEP[0m: Creating log watcher for controller kube-system/calico-kube-controllers-56c5ff4bf8-sxrk6, container calico-kube-controllers [1mSTEP[0m: Collecting events for Pod kube-system/csi-proxy-gdxvh [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-nk6ch, container kube-proxy [1mSTEP[0m: failed to find events of Pod "calico-kube-controllers-56c5ff4bf8-sxrk6" [1mSTEP[0m: Collecting events for Pod kube-system/etcd-capz-conf-jylo7u-control-plane-h282z [1mSTEP[0m: failed to find events of Pod "etcd-capz-conf-jylo7u-control-plane-h282z" [1mSTEP[0m: Creating log watcher for controller kube-system/csi-proxy-mntgn, container csi-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-ct68g, container calico-node [1mSTEP[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-jylo7u-control-plane-h282z, container kube-apiserver [1mSTEP[0m: Collecting events for Pod kube-system/csi-proxy-mntgn [1mSTEP[0m: Collecting events for Pod kube-system/kube-proxy-nk6ch [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-windows-sp57t, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/etcd-capz-conf-jylo7u-control-plane-h282z, container etcd [1mSTEP[0m: failed to find events of Pod "kube-proxy-nk6ch" [1mSTEP[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-jylo7u-control-plane-h282z, container kube-scheduler [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-windows-fvqkb, container kube-proxy [1mSTEP[0m: Collecting events for Pod kube-system/calico-node-ct68g [1mSTEP[0m: failed to find events of Pod "calico-node-ct68g" [1mSTEP[0m: Collecting events for Pod kube-system/kube-proxy-windows-fvqkb [1mSTEP[0m: failed to find events of Pod "kube-proxy-windows-fvqkb" [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-windows-z9ljf, container calico-node-startup [1mSTEP[0m: Creating log watcher for controller kube-system/containerd-logger-s6kc9, container containerd-logger [1mSTEP[0m: Collecting events for Pod kube-system/containerd-logger-s6kc9 [1mSTEP[0m: failed to find events of Pod "containerd-logger-s6kc9" [1mSTEP[0m: Collecting events for Pod kube-system/containerd-logger-r85br [1mSTEP[0m: failed to find events of Pod "containerd-logger-r85br" [1mSTEP[0m: 
Creating log watcher for controller kube-system/coredns-fdff55fb9-d4nvd, container coredns [1mSTEP[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-jylo7u-control-plane-h282z, container kube-controller-manager [1mSTEP[0m: Collecting events for Pod kube-system/coredns-fdff55fb9-d4nvd [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-windows-z9ljf, container calico-node-felix [1mSTEP[0m: failed to find events of Pod "coredns-fdff55fb9-d4nvd" [1mSTEP[0m: Creating log watcher for controller kube-system/coredns-fdff55fb9-qz96x, container coredns [1mSTEP[0m: Collecting events for Pod kube-system/calico-node-windows-z9ljf [1mSTEP[0m: failed to find events of Pod "calico-node-windows-z9ljf" [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-windows-zz6z4, container calico-node-startup [1mSTEP[0m: Collecting events for Pod kube-system/coredns-fdff55fb9-qz96x [1mSTEP[0m: Collecting events for Pod kube-system/kube-apiserver-capz-conf-jylo7u-control-plane-h282z [1mSTEP[0m: failed to find events of Pod "coredns-fdff55fb9-qz96x" [1mSTEP[0m: failed to find events of Pod "kube-apiserver-capz-conf-jylo7u-control-plane-h282z" [1mSTEP[0m: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-jylo7u-control-plane-h282z [1mSTEP[0m: Creating log watcher for controller kube-system/calico-node-windows-zz6z4, container calico-node-felix [1mSTEP[0m: failed to find events of Pod "kube-controller-manager-capz-conf-jylo7u-control-plane-h282z" [1mSTEP[0m: Creating log watcher for controller kube-system/csi-proxy-gdxvh, container csi-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/metrics-server-954b56d74-pb25d, container metrics-server [1mSTEP[0m: Collecting events for Pod kube-system/calico-node-windows-zz6z4 [1mSTEP[0m: failed to find events of Pod "calico-node-windows-zz6z4" [1mSTEP[0m: Collecting events for Pod kube-system/kube-scheduler-capz-conf-jylo7u-control-plane-h282z [1mSTEP[0m: Collecting events for Pod kube-system/metrics-server-954b56d74-pb25d [1mSTEP[0m: Creating log watcher for controller kube-system/containerd-logger-r85br, container containerd-logger [1mSTEP[0m: failed to find events of Pod "kube-scheduler-capz-conf-jylo7u-control-plane-h282z" [1mSTEP[0m: Collecting events for Pod kube-system/kube-proxy-windows-sp57t [1mSTEP[0m: failed to find events of Pod "kube-proxy-windows-sp57t" [1mSTEP[0m: failed to find events of Pod "metrics-server-954b56d74-pb25d" Nov 10 06:01:09.735: INFO: waiting for 3 replicas (current: 2) [1mSTEP[0m: Fetching activity logs took 2.04584799s ++ popd /home/prow/go/src/k8s.io/windows-testing ++ /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/log/redact.sh ================ REDACTING LOGS ================ ... skipping 97 lines ... 
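The "Creating log watcher for controller ..." steps stream container logs from the workload cluster's kube-system pods while artifacts are collected. A minimal client-go sketch of streaming one container's log; the pod and container names are placeholders.

// Sketch: follow one container's log, roughly what each e2e log watcher does.
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	req := cs.CoreV1().Pods("kube-system").GetLogs("kube-proxy-example", &corev1.PodLogOptions{
		Container: "kube-proxy",
		Follow:    true, // tail the log, like the watcher that runs during artifact collection
	})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // copy the container log to stdout (or an artifact file)
}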
Nov 10 06:02:51.928: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } I1110 06:02:53.209914 14266 streamwatcher.go:114] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5815, ErrCode=NO_ERROR, debug="" I1110 06:02:53.210016 14266 reflector.go:559] test/e2e/node/taints.go:151: Watch close - *v1.Pod total 5 items received I1110 06:02:53.209916 14266 streamwatcher.go:114] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5815, ErrCode=NO_ERROR, debug="" I1110 06:02:53.210139 14266 reflector.go:559] test/e2e/node/taints.go:151: Watch close - *v1.Pod total 5 items received Nov 10 06:03:23.171: INFO: ConsumeCPU failure: Post "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=250&requestSizeMillicores=100": dial tcp 20.82.197.88:6443: i/o timeout Nov 10 06:03:23.172: INFO: Unexpected error: <*url.Error | 0xc002ad7770>: { Op: "Get", URL: "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/replicationcontrollers/rc", Err: <*net.OpError | 0xc00349a280>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 10 06:03:23.172: FAIL: Get "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/replicationcontrollers/rc": dial tcp 20.82.197.88:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc002765680) test/e2e/framework/autoscaling/autoscaling_utils.go:428 +0x1bd k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1() test/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a ... skipping 14 lines ... k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00122be68, {0x74f15b2?, 0xc00477baa0?}, {{0x0, 0x0}, {0x74f15fc, 0x2}, {0x75410f5, 0x15}}, 0xc000ab0d20) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74f15b2?, 0x620ff85?}, {{0x0, 0x0}, {0x74f15fc, 0x2}, {0x75410f5, 0x15}}, {0x74f2518, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b E1110 06:03:23.172981 14266 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/autoscaling/autoscaling_utils.go", LineNumber:428, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc002765680)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:428 +0x1bd\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1()\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:479 +0x2a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2707811, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7f0ae88?, 0xc000084098?}, 0x3?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7f0ae88, 0xc000084098}, 0xc002b45cc8, 0x2f94e6a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f0ae88, 0xc000084098}, 0xb0?, 0x2f93a05?, 0x10?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7f0ae88, 0xc000084098}, 0xc002dd0710?, 0xc0022a9c00?, 0x25ef967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x74f15b2?, 0x2?, 0x74f15fc?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc002765680, 0x3, 0x3?)\n\ttest/e2e/framework/autoscaling/autoscaling_utils.go:478 +0x7f\nk8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00122be68, {0x74f15b2?, 0xc00477baa0?}, {{0x0, 0x0}, {0x74f15fc, 0x2}, {0x75410f5, 0x15}}, 0xc000ab0d20)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8\nk8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x74f15b2?, 0x620ff85?}, {{0x0, 0x0}, {0x74f15fc, 0x2}, {0x75410f5, 0x15}}, {0x74f2518, 0x3}, ...)\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212\nk8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1()\n\ttest/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b", CustomMessage:""}} ([1m[38;5;9mYour Test Panicked[0m [38;5;243mtest/e2e/framework/autoscaling/autoscaling_utils.go:428[0m When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. ... skipping 15 lines ... 
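The Ginkgo panic text above describes the required pattern: when Fail() (or an assertion that calls it) can run inside a goroutine, that goroutine must defer GinkgoRecover() so the panic is captured by the framework instead of crashing the process. An illustrative Ginkgo v2 sketch, not taken from this suite:

// Sketch: make an assertion from a goroutine safely by deferring GinkgoRecover().
package example_test

import (
	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

var _ = ginkgo.It("makes an assertion from a goroutine safely", func() {
	done := make(chan struct{})
	go func() {
		defer ginkgo.GinkgoRecover() // without this, a failing assertion here panics the whole process
		defer close(done)
		gomega.Expect(1 + 1).To(gomega.Equal(2))
	}()
	<-done
})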
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x702d9c0?, 0xc000376e70}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000376e70?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x702d9c0, 0xc000376e70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0001d0700, 0xd5}, {0xc00122b7c8?, 0x74f119a?, 0xc00122b7e8?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002058300, 0xc0}, {0xc00122b860?, 0xc002058300?, 0xc00122b888?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ed8bc0, 0xc002ad7770}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc002765680) ... skipping 46 lines ... I1110 06:05:28.220479 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2318/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=43676&timeout=9m14s&timeoutSeconds=554&watch=true Nov 10 06:05:38.175: INFO: ConsumeCPU failure: Post "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=250&requestSizeMillicores=100": dial tcp 20.82.197.88:6443: i/o timeout Nov 10 06:05:38.176: INFO: ConsumeCPU URL: {https capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } I1110 06:05:59.224724 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2318/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=43676&timeout=9m14s&timeoutSeconds=554&watch=true I1110 06:05:59.224780 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-8865/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=43613&timeout=5m47s&timeoutSeconds=347&watch=true Nov 10 06:06:08.179: INFO: ConsumeCPU failure: Post "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/services/rc-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=250&requestSizeMillicores=100": dial tcp 20.82.197.88:6443: i/o timeout Nov 10 06:06:08.179: INFO: Unexpected error: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } Nov 10 06:06:08.179: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).sendConsumeCPURequest(0xc002765680, 0xfa) test/e2e/framework/autoscaling/autoscaling_utils.go:368 +0x107 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).makeConsumeCPURequests(0xc002765680) test/e2e/framework/autoscaling/autoscaling_utils.go:282 +0x1f7 created 
by k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer test/e2e/framework/autoscaling/autoscaling_utils.go:238 +0xa3d STEP: deleting ReplicationController rc in namespace horizontal-pod-autoscaling-8680, will wait for the garbage collector to delete the pods 11/10/22 06:06:18.183 I1110 06:06:30.225947 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-8865/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=43613&timeout=5m47s&timeoutSeconds=347&watch=true I1110 06:06:30.225950 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2318/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=43676&timeout=9m14s&timeoutSeconds=554&watch=true Nov 10 06:06:48.184: INFO: Unexpected error: <*url.Error | 0xc00413be00>: { Op: "Get", URL: "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/replicationcontrollers/rc", Err: <*net.OpError | 0xc003c7d360>{ Op: "dial", Net: "tcp", Source: nil, ... skipping 3 lines ... Zone: "", }, Err: {}, }, } Nov 10 06:06:48.184: FAIL: Get "https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/horizontal-pod-autoscaling-8680/replicationcontrollers/rc": dial tcp 20.82.197.88:6443: i/o timeout Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).CleanUp(0xc002765680) test/e2e/framework/autoscaling/autoscaling_utils.go:546 +0x2a5 panic({0x702d9c0, 0xc000376e70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000376e70?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x702d9c0, 0xc000376e70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002058300, 0xc0}, {0xc00122b860?, 0xc002058300?, 0xc00122b888?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ed8bc0, 0xc002ad7770}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc002765680) ... skipping 22 lines ... 
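The cleanup step above deletes ReplicationController "rc" and lets the garbage collector remove its pods. A minimal client-go sketch of that delete with background propagation; the namespace is a placeholder, and the dial timeout seen in the log would surface here as the returned error.

// Sketch: delete the resource consumer's ReplicationController and let the GC
// clean up its pods asynchronously.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	propagation := metav1.DeletePropagationBackground // the garbage collector deletes the pods
	err = cs.CoreV1().ReplicationControllers("horizontal-pod-autoscaling-example").
		Delete(context.TODO(), "rc", metav1.DeleteOptions{PropagationPolicy: &propagation})
	if err != nil {
		fmt.Println("delete failed:", err) // e.g. i/o timeout when the API server is unreachable
	}
}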
test/e2e/autoscaling/horizontal_pod_autoscaling.go:81 +0x8b [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 10 06:06:48.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1110 06:07:01.228160 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-single-pod-8865/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-4&resourceVersion=43613&timeout=5m47s&timeoutSeconds=347&watch=true I1110 06:07:01.228161 14266 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://capz-conf-jylo7u-2b6b739b.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/taint-multiple-pods-2318/pods?allowWatchBookmarks=true&labelSelector=group%3Dtaint-eviction-b&resourceVersion=43676&timeout=9m14s&timeoutSeconds=554&watch=true {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-11-10T06:07:05Z"} {"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-11-10T06:07:05Z"}
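The final AfterEach step waits "up to 3m0s for all (but 0) nodes to be ready" before the entrypoint's grace period expires. A minimal client-go sketch of that node-readiness poll; the kubeconfig path is a placeholder.

// Sketch: poll until every node reports the Ready condition, as the teardown does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capz-conf-example.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(10*time.Second, 3*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // keep retrying on transient API errors
		}
		for _, n := range nodes.Items {
			if !nodeReady(n) {
				return false, nil
			}
		}
		return true, nil
	})
	fmt.Println("all nodes ready:", err == nil)
}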