Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.5
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 578 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-hjkjp
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-5wfq6
STEP: Creating log watcher for controller kube-system/calico-node-2b9d6, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-t1ln8v-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-5zbzq
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-t1ln8v-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-t1ln8v-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-5zbzq, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-t1ln8v-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-lf2tm, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-lf2tm
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-t1ln8v-control-plane-0
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-t1ln8v-control-plane-0"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-hjkjp, container coredns
STEP: Collecting events for Pod kube-system/calico-node-2b9d6
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-t1ln8v-control-plane-0
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-5wfq6, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-t1ln8v-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-t1ln8v-control-plane-0, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-t1ln8v-control-plane-0
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-t1ln8v-control-plane-0"
STEP: failed to find events of Pod "etcd-kcp-adoption-t1ln8v-control-plane-0"
STEP: Fetching activity logs took 1.693040462s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-yrfqz2" namespace
STEP: Deleting cluster kcp-adoption-yrfqz2/kcp-adoption-t1ln8v
STEP: Deleting cluster kcp-adoption-t1ln8v
INFO: Waiting for the Cluster kcp-adoption-yrfqz2/kcp-adoption-t1ln8v to be deleted
STEP: Waiting for cluster kcp-adoption-t1ln8v to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sat, 07 Jan 2023 17:04:45 UTC on Ginkgo node 8 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 7 17:04:45.248: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/07 17:04:45 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-gzif6t" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-gzif6t --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 167 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-w6d9z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-vsfumg-control-plane-87dk5, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-cf26d
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-8rcx9
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-vsfumg-control-plane-87dk5
STEP: Creating log watcher for controller kube-system/kube-proxy-wftz5, container kube-proxy
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-vsfumg-control-plane-87dk5"
STEP: Dumping workload cluster mhc-remediation-ay6jge/mhc-remediation-vsfumg Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-wftz5
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-vsfumg-control-plane-87dk5
STEP: Creating log watcher for controller kube-system/calico-node-99vxp, container calico-node
STEP: failed to find events of Pod "etcd-mhc-remediation-vsfumg-control-plane-87dk5"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-w6d9z
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-vsfumg-control-plane-87dk5, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-99vxp
STEP: Creating log watcher for controller kube-system/calico-node-qbpxg, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-qbpxg
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-vsfumg-control-plane-87dk5
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-vsfumg-control-plane-87dk5, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-vvtvk
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-vsfumg-control-plane-87dk5, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-vvtvk, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-8rcx9, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-vsfumg-control-plane-87dk5
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-vsfumg-control-plane-87dk5"
STEP: Creating log watcher for controller kube-system/kube-proxy-cf26d, container kube-proxy
STEP: Fetching activity logs took 1.840465516s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-ay6jge" namespace
STEP: Deleting cluster mhc-remediation-ay6jge/mhc-remediation-vsfumg
STEP: Deleting cluster mhc-remediation-vsfumg
INFO: Waiting for the Cluster mhc-remediation-ay6jge/mhc-remediation-vsfumg to be deleted
... skipping 237 lines ...
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-8bj900-control-plane-ldszs
STEP: Collecting events for Pod kube-system/etcd-node-drain-8bj900-control-plane-ldszs
STEP: Collecting events for Pod kube-system/kube-proxy-zp749
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-qxwmg
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-lt4cs
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-8bj900-control-plane-l7tz8
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-8bj900-control-plane-l7tz8, container kube-apiserver: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-8bj900-control-plane-l7tz8, container kube-controller-manager: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Error starting logs stream for pod kube-system/calico-node-m6gcz, container calico-node: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-8bj900-control-plane-l7tz8, container etcd: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-8bj900-control-plane-l7tz8, container kube-scheduler: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-zp749, container kube-proxy: pods "node-drain-8bj900-control-plane-l7tz8" not found
STEP: Fetching activity logs took 2.32501411s
STEP: Dumping all the Cluster API resources in the "node-drain-t9hyiy" namespace
STEP: Deleting cluster node-drain-t9hyiy/node-drain-8bj900
STEP: Deleting cluster node-drain-8bj900
INFO: Waiting for the Cluster node-drain-t9hyiy/node-drain-8bj900 to be deleted
STEP: Waiting for cluster node-drain-8bj900 to be deleted
... skipping 80 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-s7zqz, container kube-proxy
STEP: Fetching kube-system pod logs took 408.72776ms
STEP: Dumping workload cluster machine-pool-n32b9k/machine-pool-3t7lje Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-3t7lje-control-plane-8r7cl
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-3t7lje-control-plane-8r7cl
STEP: Creating log watcher for controller kube-system/calico-node-pblv2, container calico-node
STEP: failed to find events of Pod "kube-scheduler-machine-pool-3t7lje-control-plane-8r7cl"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-wcfxr, container calico-kube-controllers
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-3t7lje-control-plane-8r7cl"
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-3t7lje-control-plane-8r7cl
STEP: Creating log watcher for controller kube-system/calico-node-windows-bm8rc, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-wcfxr
STEP: Collecting events for Pod kube-system/kube-proxy-windows-pcg4l
STEP: Collecting events for Pod kube-system/kube-proxy-s7zqz
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-3t7lje-control-plane-8r7cl, container kube-controller-manager
... skipping 5 lines ...
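The "Error starting logs stream ... not found" entries above occur when the log collector races cluster teardown: the pod (or its node) is deleted between listing and streaming. A minimal sketch of tolerating pods that vanish mid-collection — the pod names and output directory here are illustrative, not taken from the real collector:

```shell
#!/usr/bin/env bash
# Sketch: skip pods that disappear before their logs can be streamed.
# Pod names and artifact paths are hypothetical examples.
outdir=$(mktemp -d)
for pod in kube-apiserver-cp-0 kube-proxy-zp749; do
  if kubectl -n kube-system get pod "$pod" >/dev/null 2>&1; then
    kubectl -n kube-system logs "$pod" --all-containers > "$outdir/$pod.log"
  else
    echo "skipping $pod: not found"
  fi
done
```

Checking existence first narrows, but does not eliminate, the race; the errors in the log above are harmless noise from exactly this window.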
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-gvhxc
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-3t7lje-control-plane-8r7cl, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9r25f, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-rv29n
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9r25f
STEP: Collecting events for Pod kube-system/etcd-machine-pool-3t7lje-control-plane-8r7cl
STEP: failed to find events of Pod "etcd-machine-pool-3t7lje-control-plane-8r7cl"
STEP: Collecting events for Pod kube-system/calico-node-pblv2
STEP: Creating log watcher for controller kube-system/calico-node-vjgh9, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-3t7lje-control-plane-8r7cl, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bm8rc, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-s7zqz, container kube-proxy: pods "kube-proxy-s7zqz" not found
STEP: Error starting logs stream for pod kube-system/calico-node-pblv2, container calico-node: pods "calico-node-pblv2" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-pcg4l, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-bm8rc, container calico-node-startup: pods "win-p-win000002" not found
STEP: Fetching activity logs took 1.782335675s
STEP: Dumping all the Cluster API resources in the "machine-pool-n32b9k" namespace
STEP: Deleting cluster machine-pool-n32b9k/machine-pool-3t7lje
STEP: Deleting cluster machine-pool-3t7lje
INFO: Waiting for the Cluster machine-pool-n32b9k/machine-pool-3t7lje to be deleted
STEP: Waiting for cluster machine-pool-3t7lje to be deleted
... skipping 72 lines ...
Jan 7 17:13:15.128: INFO: Collecting logs for Windows node quick-sta-hmg97 in cluster quick-start-ry5gq6 in namespace quick-start-m06ubp
Jan 7 17:15:56.902: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-hmg97 to /logs/artifacts/clusters/quick-start-ry5gq6/machines/quick-start-ry5gq6-md-win-8457b57f6c-5wznx/crashdumps.tar
Jan 7 17:15:58.704: INFO: Collecting boot logs for AzureMachine quick-start-ry5gq6-md-win-hmg97
Failed to get logs for machine quick-start-ry5gq6-md-win-8457b57f6c-5wznx, cluster quick-start-m06ubp/quick-start-ry5gq6: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 7 17:15:59.653: INFO: Collecting logs for Windows node quick-sta-dxhbc in cluster quick-start-ry5gq6 in namespace quick-start-m06ubp
Jan 7 17:18:35.785: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-dxhbc to /logs/artifacts/clusters/quick-start-ry5gq6/machines/quick-start-ry5gq6-md-win-8457b57f6c-xjvzb/crashdumps.tar
Jan 7 17:18:37.573: INFO: Collecting boot logs for AzureMachine quick-start-ry5gq6-md-win-dxhbc
Failed to get logs for machine quick-start-ry5gq6-md-win-8457b57f6c-xjvzb, cluster quick-start-m06ubp/quick-start-ry5gq6: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-m06ubp/quick-start-ry5gq6 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-c2nhv
STEP: Creating log watcher for controller kube-system/calico-node-5xrtq, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-9rb2s, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-windows-2fgxt
STEP: Creating log watcher for controller kube-system/etcd-quick-start-ry5gq6-control-plane-9fplt, container etcd
STEP: Collecting events for Pod kube-system/calico-node-6q2x9
STEP: Creating log watcher for controller kube-system/calico-node-windows-2fgxt, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-5xrtq
STEP: Creating log watcher for controller kube-system/calico-node-6q2x9, container calico-node
STEP: Collecting events for Pod kube-system/etcd-quick-start-ry5gq6-control-plane-9fplt
STEP: failed to find events of Pod "etcd-quick-start-ry5gq6-control-plane-9fplt"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-ry5gq6-control-plane-9fplt, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-ry5gq6-control-plane-9fplt
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-c2nhv, container calico-kube-controllers
STEP: failed to find events of Pod "kube-scheduler-quick-start-ry5gq6-control-plane-9fplt"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-ry5gq6-control-plane-9fplt, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-windows-bf2ld
STEP: Creating log watcher for controller kube-system/calico-node-windows-2fgxt, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-ry5gq6-control-plane-9fplt
STEP: failed to find events of Pod "kube-controller-manager-quick-start-ry5gq6-control-plane-9fplt"
STEP: Creating log watcher for controller kube-system/kube-proxy-kp44n, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-wn6mv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-9rb2s, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-ry5gq6-control-plane-9fplt
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-mpk7h, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-wn6mv
... skipping 13 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-qp5nt
STEP: Creating log watcher for controller kube-system/csi-proxy-qhxqg, container csi-proxy
STEP: Creating log watcher for controller kube-system/csi-proxy-hd2vl, container csi-proxy
STEP: Collecting events for Pod kube-system/containerd-logger-pl6mm
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-d2ckc, container coredns
STEP: Collecting events for Pod kube-system/csi-proxy-qhxqg
STEP: failed to find events of Pod "kube-apiserver-quick-start-ry5gq6-control-plane-9fplt"
STEP: Fetching activity logs took 1.963178134s
STEP: Dumping all the Cluster API resources in the "quick-start-m06ubp" namespace
STEP: Deleting cluster quick-start-m06ubp/quick-start-ry5gq6
STEP: Deleting cluster quick-start-ry5gq6
INFO: Waiting for the Cluster quick-start-m06ubp/quick-start-ry5gq6 to be deleted
STEP: Waiting for cluster quick-start-ry5gq6 to be deleted
... skipping 78 lines ...
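The "Failed to get logs for machine ..." entries above come from a PowerShell one-liner the harness runs on each Windows node to tar up crash dumps; both of its commands exited with status 1. Its logic, rendered as a bash sketch for readability — the paths are illustrative stand-ins, since the real step targets `c:\localdumps` and `tar.exe` on the Windows node:

```shell
#!/usr/bin/env bash
# Bash rendering of the Windows crash-dump collection step (illustrative
# paths; the real command runs tar.exe against c:\localdumps over SSH).
p="/tmp/localdumps"
if [ -d "$p" ]; then
  # Archive the dump directory so it can be copied into the job artifacts.
  tar -czvf /tmp/crashdumps.tar "$p" 2>&1
else
  echo "No crash dumps found at $p"
fi
```

Note that in the PowerShell original even the "no dumps" branch reports exit status 1 in this log, which is why the collector records a failure rather than an empty archive.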
Jan 7 17:15:38.480: INFO: Collecting logs for Windows node md-scale-f55zw in cluster md-scale-kagl7r in namespace md-scale-uqg72n
Jan 7 17:18:16.489: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-f55zw to /logs/artifacts/clusters/md-scale-kagl7r/machines/md-scale-kagl7r-md-win-95c785855-52ccm/crashdumps.tar
Jan 7 17:18:18.022: INFO: Collecting boot logs for AzureMachine md-scale-kagl7r-md-win-f55zw
Failed to get logs for machine md-scale-kagl7r-md-win-95c785855-52ccm, cluster md-scale-uqg72n/md-scale-kagl7r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 7 17:18:18.943: INFO: Collecting logs for Windows node md-scale-z8phb in cluster md-scale-kagl7r in namespace md-scale-uqg72n
Jan 7 17:20:48.738: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-z8phb to /logs/artifacts/clusters/md-scale-kagl7r/machines/md-scale-kagl7r-md-win-95c785855-zctt9/crashdumps.tar
Jan 7 17:20:50.561: INFO: Collecting boot logs for AzureMachine md-scale-kagl7r-md-win-z8phb
Failed to get logs for machine md-scale-kagl7r-md-win-95c785855-zctt9, cluster md-scale-uqg72n/md-scale-kagl7r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-uqg72n/md-scale-kagl7r kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-b7pgz, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-b7pgz
STEP: Collecting events for Pod kube-system/containerd-logger-7hrwg
STEP: Collecting events for Pod kube-system/calico-node-kd84f
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-kagl7r-control-plane-sz9k2
... skipping 52 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-07T20:56:28Z"}
++ early_exit_handler
++ '[' -n 160 ']'
++ kill -TERM 160
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-07T21:11:28Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-07T21:11:28Z"}
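The run ends with the Prow entrypoint enforcing its hard timeout: the test process did not finish within 4h0m0s, was sent TERM, and then failed to exit within the 15m grace period. The general pattern — a watchdog that sends TERM at the deadline and escalates to KILL after a grace period — can be sketched as follows (a simplified illustration, not the actual Prow entrypoint code; the function name and timings are made up):

```shell
#!/usr/bin/env bash
# Sketch of a timeout-plus-grace-period wrapper, in the spirit of the
# Prow entrypoint behavior seen above. Names and durations are illustrative.
run_with_timeout() {
  local timeout_s=$1 grace_s=$2; shift 2
  "$@" &                       # start the wrapped job
  local pid=$!
  ( sleep "$timeout_s" && kill -TERM "$pid" 2>/dev/null \
      && sleep "$grace_s" && kill -KILL "$pid" 2>/dev/null ) &
  local watchdog=$!
  wait "$pid"; local rc=$?     # 143 (128+SIGTERM) if the deadline fired
  kill "$watchdog" 2>/dev/null # cancel the watchdog if the job finished
  return "$rc"
}

# Example: a 2-second "job" against a 1-second deadline and 1-second grace.
run_with_timeout 1 1 sleep 2 && echo "finished" || echo "timed out"
```

A job that ignores TERM (as happened here with the lingering Docker process) only dies at the KILL escalation, which is why the log shows both the grace-period message and the follow-up "Could not kill process" once the process had finally gone away on its own.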