Result | FAILURE |
Tests | 0 failed / 7 succeeded |
Started | |
Elapsed | 4h15m |
Revision | release-1.5 |
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 578 lines ...
STEP: Fetching kube-system pod logs took 260.991808ms
STEP: Dumping workload cluster kcp-adoption-u7kb3b/kcp-adoption-644m29 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-644m29-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/calico-node-p7n5b
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-qlzlb, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-644m29-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-644m29-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-4nxhh, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rmbxr, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-cxv7k, container coredns
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-qlzlb
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-rmbxr
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-644m29-control-plane-0
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-644m29-control-plane-0, container kube-apiserver
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-644m29-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-644m29-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-644m29-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-644m29-control-plane-0"
STEP: Creating log watcher for controller kube-system/calico-node-p7n5b, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-4nxhh
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-cxv7k
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-644m29-control-plane-0, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-644m29-control-plane-0
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-644m29-control-plane-0"
STEP: Fetching activity logs took 1.458415151s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-u7kb3b" namespace
STEP: Deleting cluster kcp-adoption-u7kb3b/kcp-adoption-644m29
STEP: Deleting cluster kcp-adoption-644m29
INFO: Waiting for the Cluster kcp-adoption-u7kb3b/kcp-adoption-644m29 to be deleted
STEP: Waiting for cluster kcp-adoption-644m29 to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Thu, 12 Jan 2023 17:08:42 UTC on Ginkgo node 6 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 12 17:08:43.000: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/12 17:08:43 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-e5fxcj" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-e5fxcj --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 65 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-e5fxcj-control-plane-x2vqm, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-e5fxcj-control-plane-x2vqm, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-g5p6x, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-cnfm5
STEP: Creating log watcher for controller kube-system/kube-proxy-cnfm5, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-self-hosted-e5fxcj-control-plane-x2vqm
STEP: failed to find events of Pod "etcd-self-hosted-e5fxcj-control-plane-x2vqm"
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-e5fxcj-control-plane-x2vqm"
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-e5fxcj-control-plane-x2vqm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-2sq5n, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-2sq5n
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-g5p6x
STEP: Collecting events for Pod kube-system/calico-node-jr79t
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-m29tl
STEP: Dumping workload cluster self-hosted/self-hosted-e5fxcj Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-7ml5t, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-m29tl, container coredns
STEP: Collecting events for Pod kube-system/calico-node-7ml5t
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-d2vtz
STEP: Creating log watcher for controller kube-system/calico-node-jr79t, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-e5fxcj-control-plane-x2vqm
STEP: failed to find events of Pod "kube-scheduler-self-hosted-e5fxcj-control-plane-x2vqm"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-e5fxcj-control-plane-x2vqm, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-e5fxcj-control-plane-x2vqm
STEP: failed to find events of Pod "kube-apiserver-self-hosted-e5fxcj-control-plane-x2vqm"
STEP: Fetching activity logs took 7.310048667s
Jan 12 17:17:41.692: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 12 17:17:42.133: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-e5fxcj
INFO: Waiting for the Cluster self-hosted/self-hosted-e5fxcj to be deleted
STEP: Waiting for cluster self-hosted-e5fxcj to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7968bc94b8-m5jnc, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6c76c59d6b-wz778, container manager: http2: client connection lost
Jan 12 17:22:22.386: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 12 17:22:22.415: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 12 17:22:54.239: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 69 lines ...
STEP: Collecting events for Pod kube-system/calico-node-shjwx
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-dt7zl
STEP: Creating log watcher for controller kube-system/calico-node-zxhwt, container calico-node
STEP: Dumping workload cluster mhc-remediation-urrtq6/mhc-remediation-o1lm5s Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-z449j, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-o1lm5s-control-plane-77wtk
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-o1lm5s-control-plane-77wtk"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-2ssx2, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-2ssx2
STEP: Creating log watcher for controller kube-system/calico-node-shjwx, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-zxhwt
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-dt7zl, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-gw9mm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-4fhr6
STEP: Collecting events for Pod kube-system/kube-proxy-gw9mm
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-z449j
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-o1lm5s-control-plane-77wtk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-o1lm5s-control-plane-77wtk, container etcd
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-o1lm5s-control-plane-77wtk
STEP: failed to find events of Pod "etcd-mhc-remediation-o1lm5s-control-plane-77wtk"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-o1lm5s-control-plane-77wtk, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-o1lm5s-control-plane-77wtk, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-4fhr6, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-o1lm5s-control-plane-77wtk
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-o1lm5s-control-plane-77wtk"
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-o1lm5s-control-plane-77wtk
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-o1lm5s-control-plane-77wtk"
STEP: Fetching activity logs took 22.927858851s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-urrtq6" namespace
STEP: Deleting cluster mhc-remediation-urrtq6/mhc-remediation-o1lm5s
STEP: Deleting cluster mhc-remediation-o1lm5s
INFO: Waiting for the Cluster mhc-remediation-urrtq6/mhc-remediation-o1lm5s to be deleted
STEP: Waiting for cluster mhc-remediation-o1lm5s to be deleted
... skipping 208 lines ...
Jan 12 17:17:22.080: INFO: Collecting logs for Windows node quick-sta-626g5 in cluster quick-start-rfco57 in namespace quick-start-9v7i71
Jan 12 17:19:53.288: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-626g5 to /logs/artifacts/clusters/quick-start-rfco57/machines/quick-start-rfco57-md-win-94c79466c-b8pdg/crashdumps.tar
Jan 12 17:19:54.996: INFO: Collecting boot logs for AzureMachine quick-start-rfco57-md-win-626g5
Failed to get logs for machine quick-start-rfco57-md-win-94c79466c-b8pdg, cluster quick-start-9v7i71/quick-start-rfco57: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 12 17:19:55.766: INFO: Collecting logs for Windows node quick-sta-jnwsg in cluster quick-start-rfco57 in namespace quick-start-9v7i71
Jan 12 17:22:29.594: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-jnwsg to /logs/artifacts/clusters/quick-start-rfco57/machines/quick-start-rfco57-md-win-94c79466c-kwdmd/crashdumps.tar
Jan 12 17:22:31.349: INFO: Collecting boot logs for AzureMachine quick-start-rfco57-md-win-jnwsg
Failed to get logs for machine quick-start-rfco57-md-win-94c79466c-kwdmd, cluster quick-start-9v7i71/quick-start-rfco57: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-9v7i71/quick-start-rfco57 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-8tcmb
STEP: Creating log watcher for controller kube-system/calico-node-windows-r9wc6, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-459gf
STEP: Creating log watcher for controller kube-system/calico-node-windows-r9wc6, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-4fnsx, container calico-node-felix
... skipping 14 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-hl5sq
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-sqqt6, container coredns
STEP: Creating log watcher for controller kube-system/csi-proxy-mzbg5, container csi-proxy
STEP: Creating log watcher for controller kube-system/etcd-quick-start-rfco57-control-plane-48p6c, container etcd
STEP: Collecting events for Pod kube-system/etcd-quick-start-rfco57-control-plane-48p6c
STEP: Creating log watcher for controller kube-system/csi-proxy-9qv84, container csi-proxy
STEP: failed to find events of Pod "etcd-quick-start-rfco57-control-plane-48p6c"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-rfco57-control-plane-48p6c, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-rfco57-control-plane-48p6c
STEP: failed to find events of Pod "kube-scheduler-quick-start-rfco57-control-plane-48p6c"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-rfco57-control-plane-48p6c, container kube-controller-manager
STEP: Collecting events for Pod kube-system/csi-proxy-mzbg5
STEP: Collecting events for Pod kube-system/csi-proxy-9qv84
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-sqqt6
STEP: Collecting events for Pod kube-system/kube-proxy-v4b4t
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ljrmd, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-rfco57-control-plane-48p6c
STEP: Collecting events for Pod kube-system/kube-proxy-5mcvc
STEP: failed to find events of Pod "kube-controller-manager-quick-start-rfco57-control-plane-48p6c"
STEP: Creating log watcher for controller kube-system/kube-proxy-v4b4t, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-ljrmd
STEP: Creating log watcher for controller kube-system/kube-proxy-5mcvc, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-rfco57-control-plane-48p6c, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-2hzx8, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-2hzx8
... skipping 85 lines ...
Jan 12 17:23:56.866: INFO: Collecting logs for Windows node md-scale-gbq2q in cluster md-scale-cyr2jk in namespace md-scale-1v0nb1
Jan 12 17:26:24.306: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-gbq2q to /logs/artifacts/clusters/md-scale-cyr2jk/machines/md-scale-cyr2jk-md-win-69844b878d-d5d4t/crashdumps.tar
Jan 12 17:26:26.070: INFO: Collecting boot logs for AzureMachine md-scale-cyr2jk-md-win-gbq2q
Failed to get logs for machine md-scale-cyr2jk-md-win-69844b878d-d5d4t, cluster md-scale-1v0nb1/md-scale-cyr2jk: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 12 17:26:26.957: INFO: Collecting logs for Windows node md-scale-kvwbz in cluster md-scale-cyr2jk in namespace md-scale-1v0nb1
Jan 12 17:29:00.681: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-kvwbz to /logs/artifacts/clusters/md-scale-cyr2jk/machines/md-scale-cyr2jk-md-win-69844b878d-gr74m/crashdumps.tar
Jan 12 17:29:02.511: INFO: Collecting boot logs for AzureMachine md-scale-cyr2jk-md-win-kvwbz
Failed to get logs for machine md-scale-cyr2jk-md-win-69844b878d-gr74m, cluster md-scale-1v0nb1/md-scale-cyr2jk: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-1v0nb1/md-scale-cyr2jk kube-system pod logs
STEP: Fetching kube-system pod logs took 438.803309ms
STEP: Dumping workload cluster md-scale-1v0nb1/md-scale-cyr2jk Azure activity log
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-8d922
STEP: Creating log watcher for controller kube-system/csi-proxy-8jpnl, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-8jpnl
... skipping 110 lines ...
STEP: Dumping logs from the "node-drain-qbbkb7" workload cluster
STEP: Dumping workload cluster node-drain-91bt3e/node-drain-qbbkb7 logs
Jan 12 17:26:07.319: INFO: Collecting logs for Linux node node-drain-qbbkb7-control-plane-vp48w in cluster node-drain-qbbkb7 in namespace node-drain-91bt3e
Jan 12 17:32:42.498: INFO: Collecting boot logs for AzureMachine node-drain-qbbkb7-control-plane-vp48w
Failed to get logs for machine node-drain-qbbkb7-control-plane-qc7xv, cluster node-drain-91bt3e/node-drain-qbbkb7: dialing public load balancer at node-drain-qbbkb7-703d28ed.eastus.cloudapp.azure.com: dial tcp 20.81.39.180:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-91bt3e/node-drain-qbbkb7 kube-system pod logs
STEP: Fetching kube-system pod logs took 408.795278ms
STEP: Dumping workload cluster node-drain-91bt3e/node-drain-qbbkb7 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-node-drain-qbbkb7-control-plane-vp48w, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-2wjt5, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-2wjt5
... skipping 9 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-7x8tf
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-qbbkb7-control-plane-vp48w, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-ngrg6
STEP: Collecting events for Pod kube-system/calico-node-k6r95
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-qbbkb7-control-plane-vp48w
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-qbbkb7-control-plane-vp48w
STEP: Got error while iterating over activity logs for resource group capz-e2e-aqs390: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=0 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001122667s
STEP: Dumping all the Cluster API resources in the "node-drain-91bt3e" namespace
STEP: Deleting cluster node-drain-91bt3e/node-drain-qbbkb7
STEP: Deleting cluster node-drain-qbbkb7
INFO: Waiting for the Cluster node-drain-91bt3e/node-drain-qbbkb7 to be deleted
STEP: Waiting for cluster node-drain-qbbkb7 to be deleted
... skipping 75 lines ...
Jan 12 17:33:20.201: INFO: Collecting boot logs for AzureMachine machine-pool-n2w05i-control-plane-kl5s4
STEP: Dumping workload cluster machine-pool-h1hc0p/machine-pool-n2w05i kube-system pod logs
STEP: Fetching kube-system pod logs took 387.137369ms
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-ldm5k
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-n2w05i-control-plane-kl5s4
STEP: failed to find events of Pod "kube-apiserver-machine-pool-n2w05i-control-plane-kl5s4"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-n2w05i-control-plane-kl5s4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-w5nnb, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-w5nnb
STEP: Creating log watcher for controller kube-system/calico-node-7tjrb, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-n2w05i-control-plane-kl5s4
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-n2w05i-control-plane-kl5s4"
STEP: Creating log watcher for controller kube-system/kube-proxy-mf8vl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-6tfzx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-n2w05i-control-plane-kl5s4, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-windows-6tfzx
STEP: Creating log watcher for controller kube-system/kube-proxy-rssgq, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-rssgq
... skipping 6 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jwn4w
STEP: Creating log watcher for controller kube-system/calico-node-windows-4mwlj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-jwn4w, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-ldm5k, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-4mwlj, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-n2w05i-control-plane-kl5s4
STEP: failed to find events of Pod "kube-scheduler-machine-pool-n2w05i-control-plane-kl5s4"
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-n2w05i-control-plane-kl5s4, container etcd
STEP: Collecting events for Pod kube-system/etcd-machine-pool-n2w05i-control-plane-kl5s4
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-n2w05i-control-plane-kl5s4, container kube-apiserver
STEP: failed to find events of Pod "etcd-machine-pool-n2w05i-control-plane-kl5s4"
STEP: Error starting logs stream for pod kube-system/calico-node-windows-4mwlj, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-qn82p, container calico-node: pods "machine-pool-n2w05i-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-4mwlj, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-6tfzx, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-rssgq, container kube-proxy: pods "machine-pool-n2w05i-mp-0000002" not found
STEP: Fetching activity logs took 2.247065376s
STEP: Dumping all the Cluster API resources in the "machine-pool-h1hc0p" namespace
STEP: Deleting cluster machine-pool-h1hc0p/machine-pool-n2w05i
STEP: Deleting cluster machine-pool-n2w05i
INFO: Waiting for the Cluster machine-pool-h1hc0p/machine-pool-n2w05i to be deleted
STEP: Waiting for cluster machine-pool-n2w05i to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully exercise machine pools
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully create a cluster with machine pool machines
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/machine_pool.go:77
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-12T20:58:59Z"}
++ early_exit_handler
++ '[' -n 163 ']'
++ kill -TERM 163
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-12T21:13:59Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-12T21:13:59Z"}
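
The two entrypoint errors above are what turned this run into a FAILURE even though no individual test case failed: the Prow entrypoint sent SIGTERM when the job hit the 4h0m0s timeout, then gave up after the 15m0s grace period. As an illustrative sketch only (the actual capz-e2e job definition is not part of this log, so the values below are inferred from the messages above), limits like these normally come from the decoration config in the Prow job YAML:

  decorate: true
  decoration_config:
    timeout: 4h0m0s      # entrypoint sends SIGTERM to the test process once this elapses
    grace_period: 15m0s  # time allowed for cleanup before the process is force-killed

When a run regularly exceeds the limit, the usual options are raising this timeout or splitting the test matrix across jobs.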