Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 600 lines ...
STEP: Dumping workload cluster mhc-remediation-sv0xnj/mhc-remediation-7b5uah kube-system pod logs
STEP: Fetching kube-system pod logs took 488.597201ms
STEP: Creating log watcher for controller kube-system/calico-node-cfhjj, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-njqxw, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-7b5uah-control-plane-9lgc8
STEP: Creating log watcher for controller kube-system/kube-proxy-fqgxw, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-7b5uah-control-plane-9lgc8"
STEP: Collecting events for Pod kube-system/kube-proxy-fqgxw
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-c65bc, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-447lc
STEP: Collecting events for Pod kube-system/calico-node-njqxw
STEP: Creating log watcher for controller kube-system/kube-proxy-g6mh5, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-c65bc
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-7b5uah-control-plane-9lgc8, container etcd
STEP: Dumping workload cluster mhc-remediation-sv0xnj/mhc-remediation-7b5uah Azure activity log
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-7b5uah-control-plane-9lgc8
STEP: failed to find events of Pod "etcd-mhc-remediation-7b5uah-control-plane-9lgc8"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-h2l45
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-7b5uah-control-plane-9lgc8, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-h2l45, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-7b5uah-control-plane-9lgc8, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-7b5uah-control-plane-9lgc8
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-7b5uah-control-plane-9lgc8"
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-7b5uah-control-plane-9lgc8, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-g6mh5
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-447lc, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-7b5uah-control-plane-9lgc8
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-7b5uah-control-plane-9lgc8"
STEP: Collecting events for Pod kube-system/calico-node-cfhjj
STEP: Fetching activity logs took 1.172357547s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-sv0xnj" namespace
STEP: Deleting cluster mhc-remediation-sv0xnj/mhc-remediation-7b5uah
STEP: Deleting cluster mhc-remediation-7b5uah
INFO: Waiting for the Cluster mhc-remediation-sv0xnj/mhc-remediation-7b5uah to be deleted
... skipping 17 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sun, 29 Jan 2023 17:13:49 UTC on Ginkgo node 8 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 29 17:13:49.866: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/29 17:13:49 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-rvepgi" using the "management" template (Kubernetes v1.23.16, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-rvepgi --infrastructure (default) --kubernetes-version v1.23.16 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 70 lines ...
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-dj7w8
STEP: Creating log watcher for controller kube-system/kube-proxy-sw997, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-sw997
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-rvepgi-control-plane-vbzc4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-9jpzw, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-rvepgi-control-plane-vbzc4
STEP: failed to find events of Pod "kube-scheduler-self-hosted-rvepgi-control-plane-vbzc4"
STEP: Collecting events for Pod kube-system/kube-proxy-9jpzw
STEP: Creating log watcher for controller kube-system/calico-node-zp8tr, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-ddcj6
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-rvepgi-control-plane-vbzc4, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-zp8tr
STEP: Collecting events for Pod kube-system/etcd-self-hosted-rvepgi-control-plane-vbzc4
STEP: failed to find events of Pod "etcd-self-hosted-rvepgi-control-plane-vbzc4"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-wgnmx, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-rvepgi-control-plane-vbzc4
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-rvepgi-control-plane-vbzc4
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-rvepgi-control-plane-vbzc4, container kube-controller-manager
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-rvepgi-control-plane-vbzc4"
STEP: Fetching activity logs took 1.895130703s
Jan 29 17:23:12.109: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 29 17:23:12.498: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-rvepgi
INFO: Waiting for the Cluster self-hosted/self-hosted-rvepgi to be deleted
STEP: Waiting for cluster self-hosted-rvepgi to be deleted
... skipping 232 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ktrjr, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-gd6c2
STEP: Creating log watcher for controller kube-system/calico-node-windows-rd8m2, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rdhd7, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-mbkzt, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-z4o9lc-control-plane-vzwzw
STEP: failed to find events of Pod "kube-scheduler-machine-pool-z4o9lc-control-plane-vzwzw"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-rdhd7
STEP: Creating log watcher for controller kube-system/calico-node-windows-n9prc, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-48r8w, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-s7v79, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-windows-ktrjr
STEP: Creating log watcher for controller kube-system/calico-node-windows-rd8m2, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-n9prc, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-windows-rd8m2
STEP: Collecting events for Pod kube-system/calico-node-mbkzt
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-pp7qg, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-rd7rp
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-z4o9lc-control-plane-vzwzw, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-gd6c2, container calico-node: pods "machine-pool-z4o9lc-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rd8m2, container calico-node-startup: pods "win-p-win000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-ktrjr, container kube-proxy: pods "win-p-win000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-8rz9f, container kube-proxy: pods "machine-pool-z4o9lc-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-rd8m2, container calico-node-felix: pods "win-p-win000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-n9prc, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-n9prc, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-pp7qg, container kube-proxy: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.061125121s
STEP: Dumping all the Cluster API resources in the "machine-pool-fuas70" namespace
STEP: Deleting cluster machine-pool-fuas70/machine-pool-z4o9lc
STEP: Deleting cluster machine-pool-z4o9lc
INFO: Waiting for the Cluster machine-pool-fuas70/machine-pool-z4o9lc to be deleted
STEP: Waiting for cluster machine-pool-z4o9lc to be deleted
... skipping 72 lines ...
Jan 29 17:30:44.884: INFO: Collecting logs for Windows node quick-sta-mwzjt in cluster quick-start-jxe9ip in namespace quick-start-mydxev
Jan 29 17:33:22.120: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-mwzjt to /logs/artifacts/clusters/quick-start-jxe9ip/machines/quick-start-jxe9ip-md-win-76b6945f59-kxwwp/crashdumps.tar
Jan 29 17:33:24.209: INFO: Collecting boot logs for AzureMachine quick-start-jxe9ip-md-win-mwzjt
Failed to get logs for machine quick-start-jxe9ip-md-win-76b6945f59-kxwwp, cluster quick-start-mydxev/quick-start-jxe9ip: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 29 17:33:25.329: INFO: Collecting logs for Windows node quick-sta-jh65f in cluster quick-start-jxe9ip in namespace quick-start-mydxev
Jan 29 17:35:53.466: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-jh65f to /logs/artifacts/clusters/quick-start-jxe9ip/machines/quick-start-jxe9ip-md-win-76b6945f59-ls9k5/crashdumps.tar
Jan 29 17:35:55.885: INFO: Collecting boot logs for AzureMachine quick-start-jxe9ip-md-win-jh65f
Failed to get logs for machine quick-start-jxe9ip-md-win-76b6945f59-ls9k5, cluster quick-start-mydxev/quick-start-jxe9ip: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-mydxev/quick-start-jxe9ip kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-windows-62q6p
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-jxe9ip-control-plane-2snb2
STEP: Creating log watcher for controller kube-system/csi-proxy-ncvd9, container csi-proxy
STEP: Creating log watcher for controller kube-system/containerd-logger-ptxj6, container containerd-logger
STEP: Collecting events for Pod kube-system/containerd-logger-ptxj6
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-62q6p, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nptj4, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-fz4mx
STEP: Creating log watcher for controller kube-system/calico-node-windows-fz4mx, container calico-node-felix
STEP: Collecting events for Pod kube-system/etcd-quick-start-jxe9ip-control-plane-2snb2
STEP: Collecting events for Pod kube-system/kube-proxy-windows-zmjvm
STEP: failed to find events of Pod "etcd-quick-start-jxe9ip-control-plane-2snb2"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-jxe9ip-control-plane-2snb2, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nptj4
STEP: Creating log watcher for controller kube-system/calico-node-windows-62q6p, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-zmjvm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-jxe9ip-control-plane-2snb2, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-rvs9q
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-jxe9ip-control-plane-2snb2
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-jxe9ip-control-plane-2snb2
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-jxe9ip-control-plane-2snb2, container kube-controller-manager
STEP: failed to find events of Pod "kube-controller-manager-quick-start-jxe9ip-control-plane-2snb2"
STEP: Creating log watcher for controller kube-system/kube-proxy-fc7kg, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-fc7kg
STEP: Fetching activity logs took 4.047218158s
STEP: Dumping all the Cluster API resources in the "quick-start-mydxev" namespace
STEP: Deleting cluster quick-start-mydxev/quick-start-jxe9ip
STEP: Deleting cluster quick-start-jxe9ip
... skipping 69 lines ...
STEP: Dumping logs from the "node-drain-tqpkyp" workload cluster
STEP: Dumping workload cluster node-drain-yeysah/node-drain-tqpkyp logs
Jan 29 17:31:40.158: INFO: Collecting logs for Linux node node-drain-tqpkyp-control-plane-snzsk in cluster node-drain-tqpkyp in namespace node-drain-yeysah
Jan 29 17:38:14.946: INFO: Collecting boot logs for AzureMachine node-drain-tqpkyp-control-plane-snzsk
Failed to get logs for machine node-drain-tqpkyp-control-plane-p552x, cluster node-drain-yeysah/node-drain-tqpkyp: dialing public load balancer at node-drain-tqpkyp-cfbffb11.westus3.cloudapp.azure.com: dial tcp 20.118.180.93:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-yeysah/node-drain-tqpkyp kube-system pod logs
STEP: Fetching kube-system pod logs took 661.509922ms
STEP: Dumping workload cluster node-drain-yeysah/node-drain-tqpkyp Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-tqpkyp-control-plane-snzsk, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-tqpkyp-control-plane-snzsk
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-tqpkyp-control-plane-snzsk
... skipping 99 lines ...
Jan 29 17:37:23.548: INFO: Collecting logs for Windows node md-scale-bpxlc in cluster md-scale-oo7lg6 in namespace md-scale-23c7nt
Jan 29 17:39:58.916: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-bpxlc to /logs/artifacts/clusters/md-scale-oo7lg6/machines/md-scale-oo7lg6-md-win-6686c6dffb-nm6m2/crashdumps.tar
Jan 29 17:40:01.319: INFO: Collecting boot logs for AzureMachine md-scale-oo7lg6-md-win-bpxlc
Failed to get logs for machine md-scale-oo7lg6-md-win-6686c6dffb-nm6m2, cluster md-scale-23c7nt/md-scale-oo7lg6: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 29 17:40:02.413: INFO: Collecting logs for Windows node md-scale-w49km in cluster md-scale-oo7lg6 in namespace md-scale-23c7nt
Jan 29 17:42:35.979: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-w49km to /logs/artifacts/clusters/md-scale-oo7lg6/machines/md-scale-oo7lg6-md-win-6686c6dffb-sl75j/crashdumps.tar
Jan 29 17:42:38.060: INFO: Collecting boot logs for AzureMachine md-scale-oo7lg6-md-win-w49km
Failed to get logs for machine md-scale-oo7lg6-md-win-6686c6dffb-sl75j, cluster md-scale-23c7nt/md-scale-oo7lg6: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-23c7nt/md-scale-oo7lg6 kube-system pod logs
STEP: Fetching kube-system pod logs took 669.816721ms
STEP: Dumping workload cluster md-scale-23c7nt/md-scale-oo7lg6 Azure activity log
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-rbd5t
STEP: Creating log watcher for controller kube-system/calico-node-windows-9nbwt, container calico-node-startup
STEP: Creating log watcher for controller kube-system/containerd-logger-wlckq, container containerd-logger
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-r6kkl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-n6qq7, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-md-scale-oo7lg6-control-plane-zb5dg
STEP: Collecting events for Pod kube-system/calico-node-windows-pkxcf
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-nwt6n, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-9nbwt, container calico-node-felix
STEP: failed to find events of Pod "etcd-md-scale-oo7lg6-control-plane-zb5dg"
STEP: Collecting events for Pod kube-system/kube-proxy-n6qq7
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-oo7lg6-control-plane-zb5dg, container kube-apiserver
STEP: Collecting events for Pod kube-system/csi-proxy-4x4cp
STEP: Creating log watcher for controller kube-system/csi-proxy-zs6ns, container csi-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-b5f76, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-95j58, container kube-proxy
... skipping 11 lines ...
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-oo7lg6-control-plane-zb5dg
STEP: Collecting events for Pod kube-system/containerd-logger-ghb8x
STEP: Creating log watcher for controller kube-system/calico-node-windows-pkxcf, container calico-node-startup
STEP: Collecting events for Pod kube-system/csi-proxy-zs6ns
STEP: Creating log watcher for controller kube-system/calico-node-windows-pkxcf, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-oo7lg6-control-plane-zb5dg, container kube-scheduler
STEP: failed to find events of Pod "kube-apiserver-md-scale-oo7lg6-control-plane-zb5dg"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-nwt6n
STEP: Collecting events for Pod kube-system/kube-proxy-95j58
STEP: Collecting events for Pod kube-system/kube-proxy-windows-r6kkl
STEP: Creating log watcher for controller kube-system/etcd-md-scale-oo7lg6-control-plane-zb5dg, container etcd
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-oo7lg6-control-plane-zb5dg
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rbd5t, container coredns
... skipping 15 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-29T21:03:42Z"}
++ early_exit_handler
++ '[' -n 155 ']'
++ kill -TERM 155
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-29T21:18:43Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-29T21:18:43Z"}