Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 602 lines ...
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-a0b5ms-control-plane-htfrc
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-a0b5ms-control-plane-htfrc, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-gh89h, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-a0b5ms-control-plane-htfrc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-nqd24, container coredns
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-a0b5ms-control-plane-htfrc
STEP: failed to find events of Pod "etcd-mhc-remediation-a0b5ms-control-plane-htfrc"
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-a0b5ms-control-plane-htfrc
STEP: Collecting events for Pod kube-system/calico-node-2fxcm
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-a0b5ms-control-plane-htfrc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-5h8mg, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-nqd24
STEP: Collecting events for Pod kube-system/kube-proxy-9vdgx
... skipping 22 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Wed, 18 Jan 2023 17:10:26 UTC on Ginkgo node 9 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 18 17:10:26.962: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/18 17:10:26 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-v49arj" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-v49arj --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 68 lines ...
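Note: the "clusterctl config cluster" line above is the template-generation call the e2e framework logs; "(default)" resolves to this repo's infrastructure provider (azure). A rough local equivalent is sketched below, assuming a clusterctl release that still ships this subcommand (newer releases rename it to "clusterctl generate cluster") and that the AZURE_* environment variables the "management" flavor template expects are already exported; the output filename is illustrative:

    # sketch: regenerate the workload cluster template the test used
    clusterctl config cluster self-hosted-v49arj \
        --infrastructure azure \
        --kubernetes-version v1.23.15 \
        --control-plane-machine-count 1 \
        --worker-machine-count 1 \
        --flavor management > self-hosted-v49arj.yaml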
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-v49arj-control-plane-hhdmf
STEP: Collecting events for Pod kube-system/kube-proxy-p5tjv
STEP: Collecting events for Pod kube-system/kube-proxy-qfmhf
STEP: Creating log watcher for controller kube-system/kube-proxy-qfmhf, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jkfw9
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-v49arj-control-plane-hhdmf
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-v49arj-control-plane-hhdmf"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-wnzj4
STEP: Creating log watcher for controller kube-system/kube-proxy-p5tjv, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-v49arj-control-plane-hhdmf, container etcd
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-v49arj-control-plane-hhdmf
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-v49arj-control-plane-hhdmf, container kube-scheduler
STEP: failed to find events of Pod "kube-scheduler-self-hosted-v49arj-control-plane-hhdmf"
STEP: Creating log watcher for controller kube-system/calico-node-576fm, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-94ntl, container calico-kube-controllers
STEP: failed to find events of Pod "kube-apiserver-self-hosted-v49arj-control-plane-hhdmf"
STEP: Collecting events for Pod kube-system/calico-node-wz7mg
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-wnzj4, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-jkfw9, container coredns
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-94ntl
STEP: Collecting events for Pod kube-system/etcd-self-hosted-v49arj-control-plane-hhdmf
STEP: failed to find events of Pod "etcd-self-hosted-v49arj-control-plane-hhdmf"
STEP: Fetching activity logs took 1.612944008s
Jan 18 17:21:34.793: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 18 17:21:35.115: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-v49arj
INFO: Waiting for the Cluster self-hosted/self-hosted-v49arj to be deleted
STEP: Waiting for cluster self-hosted-v49arj to be deleted
... skipping 74 lines ...
Jan 18 17:19:49.560: INFO: Collecting logs for Windows node quick-sta-95pkw in cluster quick-start-mnp8hb in namespace quick-start-5rzasq
Jan 18 17:22:27.111: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-95pkw to /logs/artifacts/clusters/quick-start-mnp8hb/machines/quick-start-mnp8hb-md-win-95cb8cb86-485gr/crashdumps.tar
Jan 18 17:22:30.454: INFO: Collecting boot logs for AzureMachine quick-start-mnp8hb-md-win-95pkw
Failed to get logs for machine quick-start-mnp8hb-md-win-95cb8cb86-485gr, cluster quick-start-5rzasq/quick-start-mnp8hb: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 18 17:22:31.959: INFO: Collecting logs for Windows node quick-sta-rcvz9 in cluster quick-start-mnp8hb in namespace quick-start-5rzasq
Jan 18 17:25:08.848: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-rcvz9 to /logs/artifacts/clusters/quick-start-mnp8hb/machines/quick-start-mnp8hb-md-win-95cb8cb86-7xf77/crashdumps.tar
Jan 18 17:25:12.157: INFO: Collecting boot logs for AzureMachine quick-start-mnp8hb-md-win-rcvz9
Failed to get logs for machine quick-start-mnp8hb-md-win-95cb8cb86-7xf77, cluster quick-start-5rzasq/quick-start-mnp8hb: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-5rzasq/quick-start-mnp8hb kube-system pod logs
STEP: Fetching kube-system pod logs took 1.157556476s
STEP: Collecting events for Pod kube-system/calico-node-windows-d8gsm
STEP: Collecting events for Pod kube-system/csi-proxy-hlrjr
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-mnp8hb-control-plane-crlxl
STEP: Collecting events for Pod kube-system/calico-node-zc56r
STEP: Dumping workload cluster quick-start-5rzasq/quick-start-mnp8hb Azure activity log
STEP: Creating log watcher for controller kube-system/csi-proxy-ps7pz, container csi-proxy
STEP: failed to find events of Pod "kube-scheduler-quick-start-mnp8hb-control-plane-crlxl"
STEP: Collecting events for Pod kube-system/containerd-logger-tvvxt
STEP: Collecting events for Pod kube-system/kube-proxy-windows-hqrxq
STEP: Collecting events for Pod kube-system/containerd-logger-4lv2h
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-p7z82
STEP: Creating log watcher for controller kube-system/containerd-logger-tvvxt, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-fljwh, container calico-node
... skipping 6 lines ...
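Note: the "Failed to get logs for machine ..." errors above (and the identical ones later for the md-scale cluster) are artifact-collection failures during log dumping, not test assertions. The collector runs two commands on each Windows node, quoted verbatim in the error text; exit status 1 here typically just means C:\cni.log, or a crash dump under c:\localdumps, was not present on the node:

    # the two PowerShell commands the log collector runs remotely (as quoted above)
    Get-Content "C:\cni.log"
    $p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }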
STEP: Collecting events for Pod kube-system/calico-node-fljwh
STEP: Creating log watcher for controller kube-system/calico-node-windows-d8gsm, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-p7z82, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-mnp8hb-control-plane-crlxl
STEP: Collecting events for Pod kube-system/etcd-quick-start-mnp8hb-control-plane-crlxl
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-mnp8hb-control-plane-crlxl, container kube-controller-manager
STEP: failed to find events of Pod "etcd-quick-start-mnp8hb-control-plane-crlxl"
STEP: failed to find events of Pod "kube-apiserver-quick-start-mnp8hb-control-plane-crlxl"
STEP: Creating log watcher for controller kube-system/kube-proxy-hsvg2, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-d8gsm, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-mnp8hb-control-plane-crlxl
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-cccdg
STEP: Collecting events for Pod kube-system/kube-proxy-hsvg2
STEP: Collecting events for Pod kube-system/kube-proxy-st54f
... skipping 3 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-jp56d, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-q5xcv, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-windows-jp56d
STEP: Collecting events for Pod kube-system/calico-node-windows-q5xcv
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-mnp8hb-control-plane-crlxl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-zc56r, container calico-node
STEP: failed to find events of Pod "kube-controller-manager-quick-start-mnp8hb-control-plane-crlxl"
STEP: Creating log watcher for controller kube-system/kube-proxy-st54f, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-q5xcv, container calico-node-startup
STEP: Fetching activity logs took 1.534257849s
STEP: Dumping all the Cluster API resources in the "quick-start-5rzasq" namespace
STEP: Deleting cluster quick-start-5rzasq/quick-start-mnp8hb
STEP: Deleting cluster quick-start-mnp8hb
... skipping 86 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-ddxsl, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-2lv7at-control-plane-r5m4c, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-9g7xf, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-vg7hb
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-2lv7at-control-plane-r5m4c, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-2lv7at-control-plane-r5m4c
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-2lv7at-control-plane-r5m4c"
STEP: Creating log watcher for controller kube-system/kube-proxy-pxc9r, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-2lv7at-control-plane-r5m4c
STEP: failed to find events of Pod "kube-scheduler-machine-pool-2lv7at-control-plane-r5m4c"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-9g7xf
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-m5sq4, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-m5sq4
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-2lv7at-control-plane-r5m4c, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-x7rkc, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-x7rkc
STEP: Collecting events for Pod kube-system/calico-node-windows-ddxsl
STEP: Creating log watcher for controller kube-system/calico-node-windows-ddxsl, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-pxc9r
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-95gkj, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-2lv7at-control-plane-r5m4c, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-vg7hb, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-2lv7at-control-plane-r5m4c
STEP: failed to find events of Pod "kube-apiserver-machine-pool-2lv7at-control-plane-r5m4c"
STEP: Collecting events for Pod kube-system/etcd-machine-pool-2lv7at-control-plane-r5m4c
STEP: failed to find events of Pod "etcd-machine-pool-2lv7at-control-plane-r5m4c"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-95gkj
STEP: Error starting logs stream for pod kube-system/calico-node-7jjg7, container calico-node: pods "machine-pool-2lv7at-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-ddxsl, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-ddxsl, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-x7rkc, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-pxc9r, container kube-proxy: pods "machine-pool-2lv7at-mp-0000002" not found
STEP: Fetching activity logs took 2.256936645s
STEP: Dumping all the Cluster API resources in the "machine-pool-ryc4wu" namespace
STEP: Deleting cluster machine-pool-ryc4wu/machine-pool-2lv7at
STEP: Deleting cluster machine-pool-2lv7at
INFO: Waiting for the Cluster machine-pool-ryc4wu/machine-pool-2lv7at to be deleted
STEP: Waiting for cluster machine-pool-2lv7at to be deleted
... skipping 214 lines ...
Jan 18 17:23:01.231: INFO: Collecting logs for Windows node md-scale-ghb9q in cluster md-scale-ht9d7r in namespace md-scale-afr0u8
Jan 18 17:25:40.093: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-ghb9q to /logs/artifacts/clusters/md-scale-ht9d7r/machines/md-scale-ht9d7r-md-win-56c54cd9cc-2v6c8/crashdumps.tar
Jan 18 17:25:43.456: INFO: Collecting boot logs for AzureMachine md-scale-ht9d7r-md-win-ghb9q
Failed to get logs for machine md-scale-ht9d7r-md-win-56c54cd9cc-2v6c8, cluster md-scale-afr0u8/md-scale-ht9d7r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 18 17:25:44.984: INFO: Collecting logs for Windows node md-scale-kb4st in cluster md-scale-ht9d7r in namespace md-scale-afr0u8
Jan 18 17:28:23.189: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-kb4st to /logs/artifacts/clusters/md-scale-ht9d7r/machines/md-scale-ht9d7r-md-win-56c54cd9cc-g5kts/crashdumps.tar
Jan 18 17:28:26.589: INFO: Collecting boot logs for AzureMachine md-scale-ht9d7r-md-win-kb4st
Failed to get logs for machine md-scale-ht9d7r-md-win-56c54cd9cc-g5kts, cluster md-scale-afr0u8/md-scale-ht9d7r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-afr0u8/md-scale-ht9d7r kube-system pod logs
STEP: Fetching kube-system pod logs took 1.104785798s
STEP: Dumping workload cluster md-scale-afr0u8/md-scale-ht9d7r Azure activity log
STEP: Collecting events for Pod kube-system/calico-node-windows-vzkhn
STEP: Collecting events for Pod kube-system/etcd-md-scale-ht9d7r-control-plane-dcc6k
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-74b7g, container calico-kube-controllers
... skipping 110 lines ...
STEP: Dumping logs from the "node-drain-kj1vr5" workload cluster
STEP: Dumping workload cluster node-drain-dmvb25/node-drain-kj1vr5 logs
Jan 18 17:28:27.529: INFO: Collecting logs for Linux node node-drain-kj1vr5-control-plane-r2d4g in cluster node-drain-kj1vr5 in namespace node-drain-dmvb25
Jan 18 17:35:01.965: INFO: Collecting boot logs for AzureMachine node-drain-kj1vr5-control-plane-r2d4g
Failed to get logs for machine node-drain-kj1vr5-control-plane-82npc, cluster node-drain-dmvb25/node-drain-kj1vr5: dialing public load balancer at node-drain-kj1vr5-5a98ca87.uksouth.cloudapp.azure.com: dial tcp 20.108.113.31:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-dmvb25/node-drain-kj1vr5 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.08534499s
STEP: Dumping workload cluster node-drain-dmvb25/node-drain-kj1vr5 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-czqk9, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-kj1vr5-control-plane-r2d4g, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-kj1vr5-control-plane-r2d4g
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-18T21:00:29Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-18T21:15:29Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-18T21:15:29Z"}
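Note: the entrypoint messages above explain the FAILURE result despite 0 recorded test failures: the overall process was still running at the 4h0m0s deadline, Prow's entrypoint sent SIGTERM (the "kill -TERM 162" in the cleanup trace), and after the 15m0s grace period it still could not confirm the process had exited. Both limits come from the job's Prow decoration config; a minimal sketch is below ("timeout" and "grace_period" are standard DecorationConfig fields, but the surrounding job definition and the exact values configured for this job are inferred from this log, not taken from the repo):

    # ProwJob decoration config (sketch)
    decoration_config:
      timeout: 4h0m0s       # matches "Process did not finish before 4h0m0s timeout"
      grace_period: 15m0s   # matches "Process did not exit before 15m0s grace period"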