Recent runs | View in Spyglass

Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 586 lines ...
STEP: Dumping workload cluster mhc-remediation-eu5ty2/mhc-remediation-lmyxig kube-system pod logs
STEP: Fetching kube-system pod logs took 860.229054ms
STEP: Dumping workload cluster mhc-remediation-eu5ty2/mhc-remediation-lmyxig Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-lmyxig-control-plane-tdwpv, container etcd
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-lmyxig-control-plane-tdwpv
STEP: failed to find events of Pod "etcd-mhc-remediation-lmyxig-control-plane-tdwpv"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-lmyxig-control-plane-tdwpv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-h526s, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-dn2dm, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-lmyxig-control-plane-tdwpv
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-lmyxig-control-plane-tdwpv"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-lmyxig-control-plane-tdwpv, container kube-controller-manager
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-dn2dm
STEP: Creating log watcher for controller kube-system/calico-node-2t54h, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-h526s
STEP: Creating log watcher for controller kube-system/kube-proxy-s9ghr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-lmyxig-control-plane-tdwpv, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-2t54h
STEP: Creating log watcher for controller kube-system/calico-node-7l9wd, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-lmyxig-control-plane-tdwpv
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-lmyxig-control-plane-tdwpv"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-5bc66
STEP: Collecting events for Pod kube-system/calico-node-7l9wd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5bc66, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-w7vxc
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-w7vxc, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-lmyxig-control-plane-tdwpv
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-lmyxig-control-plane-tdwpv"
STEP: Collecting events for Pod kube-system/kube-proxy-s9ghr
STEP: Fetching activity logs took 1.467830805s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-eu5ty2" namespace
STEP: Deleting cluster mhc-remediation-eu5ty2/mhc-remediation-lmyxig
STEP: Deleting cluster mhc-remediation-lmyxig
INFO: Waiting for the Cluster mhc-remediation-eu5ty2/mhc-remediation-lmyxig to be deleted
... skipping 17 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sat, 31 Dec 2022 21:12:40 UTC on Ginkgo node 5 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 31 21:12:40.025: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/31 21:12:40 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-nc7mvy" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-nc7mvy --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 61 lines ...
STEP: Dumping workload cluster self-hosted/self-hosted-nc7mvy kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-l2ctf
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-nc7mvy-control-plane-rx6gs, container kube-controller-manager
STEP: Fetching kube-system pod logs took 722.278137ms
STEP: Dumping workload cluster self-hosted/self-hosted-nc7mvy Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-nc7mvy-control-plane-rx6gs
STEP: failed to find events of Pod "kube-scheduler-self-hosted-nc7mvy-control-plane-rx6gs"
STEP: Creating log watcher for controller kube-system/calico-node-5vk7w, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-l2ctf, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-5vk7w
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-nc7mvy-control-plane-rx6gs
STEP: Creating log watcher for controller kube-system/kube-proxy-r8z4s, container kube-proxy
STEP: failed to find events of Pod "kube-apiserver-self-hosted-nc7mvy-control-plane-rx6gs"
STEP: Collecting events for Pod kube-system/kube-proxy-r8z4s
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-nc7mvy-control-plane-rx6gs, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-nc7mvy-control-plane-rx6gs, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-dxvvv, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-fwwsv
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-nc7mvy-control-plane-rx6gs
STEP: Collecting events for Pod kube-system/calico-node-dxvvv
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-nc7mvy-control-plane-rx6gs"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-2vm4v, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-2vm4v
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-trvs9, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-fwwsv, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-trvs9
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-nc7mvy-control-plane-rx6gs, container etcd
STEP: Collecting events for Pod kube-system/etcd-self-hosted-nc7mvy-control-plane-rx6gs
STEP: failed to find events of Pod "etcd-self-hosted-nc7mvy-control-plane-rx6gs"
STEP: Fetching activity logs took 1.456630947s
Dec 31 21:27:29.058: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Dec 31 21:27:29.430: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-nc7mvy
INFO: Waiting for the Cluster self-hosted/self-hosted-nc7mvy to be deleted
STEP: Waiting for cluster self-hosted-nc7mvy to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-6cf878cbc6-xj8zv, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-5b6d47468d-bpflt, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66968bb4c5-jzc2f, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8f6f78b8b-wcjj7, container manager: http2: client connection lost
Dec 31 21:32:09.624: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Dec 31 21:32:09.643: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Dec 31 21:32:58.808: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 97 lines ...
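Note: the self-hosted spec above shows the framework generating its workload-cluster manifest via clusterctl ("--infrastructure (default)" means whatever provider clusterctl is configured with; in this job that is Azure). A rough manual equivalent of that template-generation step, a sketch only, assuming clusterctl is installed, the capz "management" flavor is available, and the usual AZURE_* credential variables are exported (none of which this log shows):

    # Sketch: reproduce the template generation the framework logs above.
    # Current clusterctl releases expose this as "generate cluster"; the
    # framework prints the older "config cluster" spelling.
    CLUSTER_NAME=self-hosted-nc7mvy   # name taken from the log
    clusterctl generate cluster "${CLUSTER_NAME}" \
      --infrastructure azure \
      --kubernetes-version v1.23.15 \
      --control-plane-machine-count 1 \
      --worker-machine-count 1 \
      --flavor management > cluster.yaml
    # The test then applies this manifest and waits for the cluster to come up.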
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-uggae7-control-plane-p97kt
STEP: Collecting events for Pod kube-system/kube-proxy-69j4c
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-uggae7-control-plane-p97kt, container kube-apiserver
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-k5tdq
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-uggae7-control-plane-p97kt, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-8th9d, container kube-proxy
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-d9qxx, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-mht8v, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-mht8v, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-69j4c, container kube-proxy: pods "machine-pool-uggae7-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-jkj62, container calico-node: pods "machine-pool-uggae7-mp-0000002" not found
STEP: Fetching activity logs took 1.846404273s
STEP: Dumping all the Cluster API resources in the "machine-pool-uelvti" namespace
STEP: Deleting cluster machine-pool-uelvti/machine-pool-uggae7
STEP: Deleting cluster machine-pool-uggae7
INFO: Waiting for the Cluster machine-pool-uelvti/machine-pool-uggae7 to be deleted
STEP: Waiting for cluster machine-pool-uggae7 to be deleted
... skipping 72 lines ...
Dec 31 21:22:03.535: INFO: Collecting logs for Windows node quick-sta-7rb72 in cluster quick-start-85o614 in namespace quick-start-d7bvei
Dec 31 21:24:37.396: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-7rb72 to /logs/artifacts/clusters/quick-start-85o614/machines/quick-start-85o614-md-win-6c9d7b8b96-22b27/crashdumps.tar
Dec 31 21:24:40.904: INFO: Collecting boot logs for AzureMachine quick-start-85o614-md-win-7rb72
Failed to get logs for machine quick-start-85o614-md-win-6c9d7b8b96-22b27, cluster quick-start-d7bvei/quick-start-85o614: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 31 21:24:42.154: INFO: Collecting logs for Windows node quick-sta-b8kds in cluster quick-start-85o614 in namespace quick-start-d7bvei
Dec 31 21:27:19.461: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-b8kds to /logs/artifacts/clusters/quick-start-85o614/machines/quick-start-85o614-md-win-6c9d7b8b96-thsbb/crashdumps.tar
Dec 31 21:27:23.057: INFO: Collecting boot logs for AzureMachine quick-start-85o614-md-win-b8kds
Failed to get logs for machine quick-start-85o614-md-win-6c9d7b8b96-thsbb, cluster quick-start-d7bvei/quick-start-85o614: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-d7bvei/quick-start-85o614 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.187792286s
STEP: Dumping workload cluster quick-start-d7bvei/quick-start-85o614 Azure activity log
STEP: Collecting events for Pod kube-system/containerd-logger-4h7bv
STEP: Creating log watcher for controller kube-system/calico-node-windows-cb74m, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-pmpqv, container calico-kube-controllers
... skipping 31 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-85o614-control-plane-wzjpd, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-gpg5j
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wfrcb, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-85o614-control-plane-wzjpd
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-85o614-control-plane-wzjpd
STEP: Collecting events for Pod kube-system/kube-proxy-windows-wfrcb
STEP: failed to find events of Pod "kube-scheduler-quick-start-85o614-control-plane-wzjpd"
STEP: failed to find events of Pod "kube-apiserver-quick-start-85o614-control-plane-wzjpd"
STEP: failed to find events of Pod "etcd-quick-start-85o614-control-plane-wzjpd"
STEP: failed to find events of Pod "kube-controller-manager-quick-start-85o614-control-plane-wzjpd"
STEP: Fetching activity logs took 3.948431818s
STEP: Dumping all the Cluster API resources in the "quick-start-d7bvei" namespace
STEP: Deleting cluster quick-start-d7bvei/quick-start-85o614
STEP: Deleting cluster quick-start-85o614
INFO: Waiting for the Cluster quick-start-d7bvei/quick-start-85o614 to be deleted
STEP: Waiting for cluster quick-start-85o614 to be deleted
... skipping 236 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-r8bmhu-control-plane-dpnt4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-node-drain-r8bmhu-control-plane-dpnt4, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9jc6t, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-r8bmhu-control-plane-84msj, container kube-apiserver
STEP: Dumping workload cluster node-drain-izqk7b/node-drain-r8bmhu Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-r8bmhu-control-plane-84msj
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-r8bmhu-control-plane-84msj, container kube-apiserver: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-8zs6s, container kube-proxy: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-r8bmhu-control-plane-84msj, container kube-controller-manager: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-r8bmhu-control-plane-84msj, container etcd: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-r8bmhu-control-plane-84msj, container kube-scheduler: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Error starting logs stream for pod kube-system/calico-node-b4xf8, container calico-node: pods "node-drain-r8bmhu-control-plane-84msj" not found
STEP: Fetching activity logs took 3.471113407s
STEP: Dumping all the Cluster API resources in the "node-drain-izqk7b" namespace
STEP: Deleting cluster node-drain-izqk7b/node-drain-r8bmhu
STEP: Deleting cluster node-drain-r8bmhu
INFO: Waiting for the Cluster node-drain-izqk7b/node-drain-r8bmhu to be deleted
STEP: Waiting for cluster node-drain-r8bmhu to be deleted
... skipping 78 lines ...
Dec 31 21:29:06.641: INFO: Collecting logs for Windows node md-scale-twh7j in cluster md-scale-gh10pe in namespace md-scale-mz12li
Dec 31 21:31:48.615: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-twh7j to /logs/artifacts/clusters/md-scale-gh10pe/machines/md-scale-gh10pe-md-win-558b7bfcfd-2xr4k/crashdumps.tar
Dec 31 21:31:52.064: INFO: Collecting boot logs for AzureMachine md-scale-gh10pe-md-win-twh7j
Failed to get logs for machine md-scale-gh10pe-md-win-558b7bfcfd-2xr4k, cluster md-scale-mz12li/md-scale-gh10pe: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 31 21:31:53.698: INFO: Collecting logs for Windows node md-scale-kpvw2 in cluster md-scale-gh10pe in namespace md-scale-mz12li
Dec 31 21:34:31.458: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-kpvw2 to /logs/artifacts/clusters/md-scale-gh10pe/machines/md-scale-gh10pe-md-win-558b7bfcfd-8dsln/crashdumps.tar
Dec 31 21:34:34.947: INFO: Collecting boot logs for AzureMachine md-scale-gh10pe-md-win-kpvw2
Failed to get logs for machine md-scale-gh10pe-md-win-558b7bfcfd-8dsln, cluster md-scale-mz12li/md-scale-gh10pe: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-mz12li/md-scale-gh10pe kube-system pod logs
STEP: Fetching kube-system pod logs took 1.16247602s
STEP: Dumping workload cluster md-scale-mz12li/md-scale-gh10pe Azure activity log
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-vtld9
STEP: Creating log watcher for controller kube-system/calico-node-windows-zj8dr, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-kk9f8, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-gh10pe-control-plane-zw4qp
STEP: Creating log watcher for controller kube-system/calico-node-windows-zj8dr, container calico-node-felix
STEP: failed to find events of Pod "kube-controller-manager-md-scale-gh10pe-control-plane-zw4qp"
STEP: Creating log watcher for controller kube-system/csi-proxy-7ptts, container csi-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-qhhbd, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-5xq2r, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-windows-zj8dr
STEP: Collecting events for Pod kube-system/calico-node-kk9f8
STEP: Creating log watcher for controller kube-system/calico-node-windows-7ns7f, container calico-node-felix
... skipping 23 lines ...
STEP: Collecting events for Pod kube-system/kube-proxy-windows-8mkmx
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-gh10pe-control-plane-zw4qp, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-windows-7tw4h
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-gh10pe-control-plane-zw4qp
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8mkmx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-gh10pe-control-plane-zw4qp, container kube-scheduler
STEP: failed to find events of Pod "etcd-md-scale-gh10pe-control-plane-zw4qp"
STEP: failed to find events of Pod "kube-scheduler-md-scale-gh10pe-control-plane-zw4qp"
STEP: failed to find events of Pod "kube-apiserver-md-scale-gh10pe-control-plane-zw4qp"
STEP: Fetching activity logs took 5.256220898s
STEP: Dumping all the Cluster API resources in the "md-scale-mz12li" namespace
STEP: Deleting cluster md-scale-mz12li/md-scale-gh10pe
STEP: Deleting cluster md-scale-gh10pe
INFO: Waiting for the Cluster md-scale-mz12li/md-scale-gh10pe to be deleted
STEP: Waiting for cluster md-scale-gh10pe to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  Should successfully scale out and scale in a MachineDeployment
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
    Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-01T01:02:49Z"}
++ early_exit_handler
++ '[' -n 165 ']'
++ kill -TERM 165
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-01T01:17:49Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-01T01:17:49Z"}