Recent runs || View in Spyglass
Result | FAILURE |
Tests | 0 failed / 6 succeeded |
Started | |
Elapsed | 4h15m |
Revision | release-1.6 |
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 588 lines ...
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-khvqd
STEP: Fetching kube-system pod logs took 480.638032ms
STEP: Dumping workload cluster mhc-remediation-jof1an/mhc-remediation-nv453m Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-nv453m-control-plane-k7sdh
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-khvqd, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-mvc7s
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-nv453m-control-plane-k7sdh"
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-nv453m-control-plane-k7sdh
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-nv453m-control-plane-k7sdh, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-nv453m-control-plane-k7sdh, container kube-apiserver
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-nv453m-control-plane-k7sdh"
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-nv453m-control-plane-k7sdh
STEP: Creating log watcher for controller kube-system/calico-node-xdh9q, container calico-node
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-nv453m-control-plane-k7sdh"
STEP: Creating log watcher for controller kube-system/kube-proxy-48x4f, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-xdh9q
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-6k2mx, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-48x4f
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-6k2mx
STEP: Creating log watcher for controller kube-system/kube-proxy-8c57h, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-r8krp, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-r8krp
STEP: Collecting events for Pod kube-system/kube-proxy-8c57h
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-nv453m-control-plane-k7sdh, container etcd
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-nv453m-control-plane-k7sdh
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-nv453m-control-plane-k7sdh, container kube-scheduler
STEP: failed to find events of Pod "etcd-mhc-remediation-nv453m-control-plane-k7sdh"
STEP: Fetching activity logs took 1.769270549s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-jof1an" namespace
STEP: Deleting cluster mhc-remediation-jof1an/mhc-remediation-nv453m
STEP: Deleting cluster mhc-remediation-nv453m
INFO: Waiting for the Cluster mhc-remediation-jof1an/mhc-remediation-nv453m to be deleted
STEP: Waiting for cluster mhc-remediation-nv453m to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Fri, 27 Jan 2023 17:12:24 UTC on Ginkgo node 6 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 27 17:12:24.495: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/27 17:12:24 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ksw10x" using the "management" template (Kubernetes v1.23.16, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ksw10x --infrastructure (default) --kubernetes-version v1.23.16 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 61 lines ...
STEP: Dumping workload cluster self-hosted/self-hosted-ksw10x kube-system pod logs
STEP: Fetching kube-system pod logs took 381.970853ms
STEP: Collecting events for Pod kube-system/calico-node-s6jgr
STEP: Collecting events for Pod kube-system/kube-proxy-k5q2x
STEP: Collecting events for Pod kube-system/etcd-self-hosted-ksw10x-control-plane-fxbl9
STEP: Dumping workload cluster self-hosted/self-hosted-ksw10x Azure activity log
STEP: failed to find events of Pod "etcd-self-hosted-ksw10x-control-plane-fxbl9"
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-ksw10x-control-plane-fxbl9, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-ksw10x-control-plane-fxbl9
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-ksw10x-control-plane-fxbl9"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-99lk4
STEP: Creating log watcher for controller kube-system/calico-node-kg5pb, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-569lm, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-ksw10x-control-plane-fxbl9
STEP: failed to find events of Pod "kube-apiserver-self-hosted-ksw10x-control-plane-fxbl9"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-ksw10x-control-plane-fxbl9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-ksw10x-control-plane-fxbl9, container kube-scheduler
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-569lm
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-m2cdg, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-ksw10x-control-plane-fxbl9
STEP: failed to find events of Pod "kube-scheduler-self-hosted-ksw10x-control-plane-fxbl9"
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-ksw10x-control-plane-fxbl9, container etcd
STEP: Collecting events for Pod kube-system/calico-node-kg5pb
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-99lk4, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-m2cdg
STEP: Creating log watcher for controller kube-system/calico-node-s6jgr, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-9zwv5
... skipping 2 lines ...
STEP: Fetching activity logs took 2.100812907s
Jan 27 17:21:33.644: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 27 17:21:33.963: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-ksw10x
INFO: Waiting for the Cluster self-hosted/self-hosted-ksw10x to be deleted
STEP: Waiting for cluster self-hosted-ksw10x to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-dbbcc9f86-wzxgc, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6d49765f-r7nvh, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6599479f7c-5cgdj, container manager: http2: client connection lost
Jan 27 17:26:14.223: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 27 17:26:14.245: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 27 17:26:34.856: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 199 lines ...
STEP: Dumping logs from the "node-drain-ey1dhr" workload cluster
STEP: Dumping workload cluster node-drain-atxf87/node-drain-ey1dhr logs
Jan 27 17:27:54.319: INFO: Collecting logs for Linux node node-drain-ey1dhr-control-plane-9sgzm in cluster node-drain-ey1dhr in namespace node-drain-atxf87
Jan 27 17:34:28.632: INFO: Collecting boot logs for AzureMachine node-drain-ey1dhr-control-plane-9sgzm
Failed to get logs for machine node-drain-ey1dhr-control-plane-d59g5, cluster node-drain-atxf87/node-drain-ey1dhr: dialing public load balancer at node-drain-ey1dhr-db782a10.westus2.cloudapp.azure.com: dial tcp 20.99.191.92:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-atxf87/node-drain-ey1dhr kube-system pod logs
STEP: Fetching kube-system pod logs took 616.261137ms
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-2fqxb
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-ey1dhr-control-plane-9sgzm
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-ey1dhr-control-plane-9sgzm, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-79j4j
... skipping 115 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-rq4vm, container calico-node-felix
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-z2ljj
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-1n6nmi-control-plane-pmkhr, container etcd
STEP: Collecting events for Pod kube-system/calico-node-windows-rq4vm
STEP: Collecting events for Pod kube-system/kube-proxy-l5rww
STEP: Collecting events for Pod kube-system/etcd-machine-pool-1n6nmi-control-plane-pmkhr
STEP: failed to find events of Pod "etcd-machine-pool-1n6nmi-control-plane-pmkhr"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-dxwbr
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-1n6nmi-control-plane-pmkhr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-s88jr, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-r75kh, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-s88jr
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-1n6nmi-control-plane-pmkhr
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-dxwbr, container kube-proxy
STEP: failed to find events of Pod "kube-scheduler-machine-pool-1n6nmi-control-plane-pmkhr"
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-dxwbr, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rq4vm, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-s88jr, container kube-proxy: pods "machine-pool-1n6nmi-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rq4vm, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-r75kh, container calico-node: pods "machine-pool-1n6nmi-mp-0000002" not found
STEP: Fetching activity logs took 1.211334901s
STEP: Dumping all the Cluster API resources in the "machine-pool-f25xig" namespace
STEP: Deleting cluster machine-pool-f25xig/machine-pool-1n6nmi
STEP: Deleting cluster machine-pool-1n6nmi
INFO: Waiting for the Cluster machine-pool-f25xig/machine-pool-1n6nmi to be deleted
STEP: Waiting for cluster machine-pool-1n6nmi to be deleted
... skipping 72 lines ...
Jan 27 17:29:39.606: INFO: Collecting logs for Windows node quick-sta-cgfsl in cluster quick-start-wby7tj in namespace quick-start-b0djxn
Jan 27 17:32:19.263: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-cgfsl to /logs/artifacts/clusters/quick-start-wby7tj/machines/quick-start-wby7tj-md-win-577b7774bf-7zjl7/crashdumps.tar
Jan 27 17:32:21.983: INFO: Collecting boot logs for AzureMachine quick-start-wby7tj-md-win-cgfsl
Failed to get logs for machine quick-start-wby7tj-md-win-577b7774bf-7zjl7, cluster quick-start-b0djxn/quick-start-wby7tj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 27 17:32:22.907: INFO: Collecting logs for Windows node quick-sta-xnqgp in cluster quick-start-wby7tj in namespace quick-start-b0djxn
Jan 27 17:34:59.976: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-xnqgp to /logs/artifacts/clusters/quick-start-wby7tj/machines/quick-start-wby7tj-md-win-577b7774bf-wscmt/crashdumps.tar
Jan 27 17:35:02.370: INFO: Collecting boot logs for AzureMachine quick-start-wby7tj-md-win-xnqgp
Failed to get logs for machine quick-start-wby7tj-md-win-577b7774bf-wscmt, cluster quick-start-b0djxn/quick-start-wby7tj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-b0djxn/quick-start-wby7tj kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-z2vjf, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-jvhhw
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5gs8v, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-spp7s, container calico-node
STEP: Fetching kube-system pod logs took 697.984599ms
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-wby7tj-control-plane-cjrkw
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-wby7tj-control-plane-cjrkw
STEP: Collecting events for Pod kube-system/kube-proxy-8psbd
STEP: Collecting events for Pod kube-system/csi-proxy-swxpm
STEP: failed to find events of Pod "kube-apiserver-quick-start-wby7tj-control-plane-cjrkw"
STEP: Dumping workload cluster quick-start-b0djxn/quick-start-wby7tj Azure activity log
STEP: failed to find events of Pod "kube-scheduler-quick-start-wby7tj-control-plane-cjrkw"
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-md49s, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-wby7tj-control-plane-cjrkw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-quick-start-wby7tj-control-plane-cjrkw, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-jkmjm, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-spp7s
STEP: Collecting events for Pod kube-system/kube-proxy-windows-md49s
STEP: Creating log watcher for controller kube-system/calico-node-windows-9ff6g, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-9ff6g, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-wby7tj-control-plane-cjrkw
STEP: failed to find events of Pod "kube-controller-manager-quick-start-wby7tj-control-plane-cjrkw"
STEP: Creating log watcher for controller kube-system/kube-proxy-62qzr, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-quick-start-wby7tj-control-plane-cjrkw
STEP: failed to find events of Pod "etcd-quick-start-wby7tj-control-plane-cjrkw"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-wby7tj-control-plane-cjrkw, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-windows-jkmjm
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-z2vjf
STEP: Collecting events for Pod kube-system/calico-node-windows-9ff6g
STEP: Creating log watcher for controller kube-system/containerd-logger-q6kl9, container containerd-logger
STEP: Collecting events for Pod kube-system/containerd-logger-q6kl9
... skipping 99 lines ...
Jan 27 17:32:12.153: INFO: Collecting logs for Windows node md-scale-gtbt2 in cluster md-scale-kjvsij in namespace md-scale-j2nsj3
Jan 27 17:34:46.194: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-gtbt2 to /logs/artifacts/clusters/md-scale-kjvsij/machines/md-scale-kjvsij-md-win-f4767c5dc-gtdj2/crashdumps.tar
Jan 27 17:34:48.593: INFO: Collecting boot logs for AzureMachine md-scale-kjvsij-md-win-gtbt2
Failed to get logs for machine md-scale-kjvsij-md-win-f4767c5dc-gtdj2, cluster md-scale-j2nsj3/md-scale-kjvsij: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 27 17:34:49.593: INFO: Collecting logs for Windows node md-scale-d8qmv in cluster md-scale-kjvsij in namespace md-scale-j2nsj3
Jan 27 17:37:25.717: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-d8qmv to /logs/artifacts/clusters/md-scale-kjvsij/machines/md-scale-kjvsij-md-win-f4767c5dc-v72z4/crashdumps.tar
Jan 27 17:37:28.180: INFO: Collecting boot logs for AzureMachine md-scale-kjvsij-md-win-d8qmv
Failed to get logs for machine md-scale-kjvsij-md-win-f4767c5dc-v72z4, cluster md-scale-j2nsj3/md-scale-kjvsij: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-j2nsj3/md-scale-kjvsij kube-system pod logs
STEP: Fetching kube-system pod logs took 685.508553ms
STEP: Creating log watcher for controller kube-system/calico-node-4gkvl, container calico-node
STEP: Collecting events for Pod kube-system/containerd-logger-vbxfj
STEP: Collecting events for Pod kube-system/csi-proxy-phl7r
STEP: Creating log watcher for controller kube-system/etcd-md-scale-kjvsij-control-plane-8pqfd, container etcd
STEP: Collecting events for Pod kube-system/etcd-md-scale-kjvsij-control-plane-8pqfd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-bjv76, container coredns
STEP: failed to find events of Pod "etcd-md-scale-kjvsij-control-plane-8pqfd"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-fmbw5, container coredns
STEP: Collecting events for Pod kube-system/calico-node-4gkvl
STEP: Dumping workload cluster md-scale-j2nsj3/md-scale-kjvsij Azure activity log
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-t926c
STEP: Creating log watcher for controller kube-system/csi-proxy-qbz9x, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-nrwlg
... skipping 10 lines ...
STEP: Collecting events for Pod kube-system/calico-node-windows-pgbtk
STEP: Collecting events for Pod kube-system/csi-proxy-qbz9x
STEP: Collecting events for Pod kube-system/kube-proxy-h28zh
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-kjvsij-control-plane-8pqfd, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-nrwlg, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-kjvsij-control-plane-8pqfd
STEP: failed to find events of Pod "kube-controller-manager-md-scale-kjvsij-control-plane-8pqfd"
STEP: Creating log watcher for controller kube-system/kube-proxy-h28zh, container kube-proxy
STEP: Creating log watcher for controller kube-system/containerd-logger-vbxfj, container containerd-logger
STEP: Collecting events for Pod kube-system/kube-proxy-windows-srs8f
STEP: Creating log watcher for controller kube-system/calico-node-windows-gfb5d, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-9gzgm, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-9gzgm
... skipping 22 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-27T21:03:05Z"}
++ early_exit_handler
++ '[' -n 154 ']'
++ kill -TERM 154
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-27T21:18:05Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-27T21:18:05Z"}