Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 599 lines ...
STEP: Fetching kube-system pod logs took 273.946758ms
STEP: Dumping workload cluster mhc-remediation-vy3b34/mhc-remediation-856vgx Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-rqgbl, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-856vgx-control-plane-kcmcl, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-856vgx-control-plane-kcmcl
STEP: Collecting events for Pod kube-system/calico-node-2nd6w
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-856vgx-control-plane-kcmcl"
STEP: Creating log watcher for controller kube-system/calico-node-l7zn4, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-l7zn4
STEP: Collecting events for Pod kube-system/kube-proxy-rqgbl
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-856vgx-control-plane-kcmcl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-2nd6w, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-w85tj
... skipping 23 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Fri, 20 Jan 2023 17:09:07 UTC on Ginkgo node 4 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 20 17:09:07.908: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/20 17:09:07 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-af4q5s" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-af4q5s --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 67 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-af4q5s-control-plane-dg4q6, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-af4q5s-control-plane-dg4q6
STEP: Creating log watcher for controller kube-system/kube-proxy-7qvxl, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-hmm7t, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-4rjzz
STEP: Collecting events for Pod kube-system/kube-proxy-7qvxl
STEP: failed to find events of Pod "kube-scheduler-self-hosted-af4q5s-control-plane-dg4q6"
STEP: Collecting events for Pod kube-system/calico-node-qq9lk
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-hmm7t
STEP: Creating log watcher for controller kube-system/calico-node-9qg7c, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-qq9lk, container calico-node
STEP: Dumping workload cluster self-hosted/self-hosted-af4q5s Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-af4q5s-control-plane-dg4q6, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-af4q5s-control-plane-dg4q6, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-4rjzz, container coredns
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-7tl4w
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-af4q5s-control-plane-dg4q6, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-af4q5s-control-plane-dg4q6
STEP: failed to find events of Pod "kube-apiserver-self-hosted-af4q5s-control-plane-dg4q6"
STEP: Collecting events for Pod kube-system/etcd-self-hosted-af4q5s-control-plane-dg4q6
STEP: failed to find events of Pod "etcd-self-hosted-af4q5s-control-plane-dg4q6"
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-af4q5s-control-plane-dg4q6
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-af4q5s-control-plane-dg4q6"
STEP: Fetching activity logs took 1.747940103s
Jan 20 17:19:18.880: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 20 17:19:19.338: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-af4q5s
INFO: Waiting for the Cluster self-hosted/self-hosted-af4q5s to be deleted
STEP: Waiting for cluster self-hosted-af4q5s to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-dbbcc9f86-wwqgb, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6599479f7c-vdjs7, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-c9b79fd49-t9vc6, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6d49765f-cxdj7, container manager: http2: client connection lost
Jan 20 17:23:59.510: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 20 17:23:59.526: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 20 17:24:24.350: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 232 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-grmfpb-control-plane-nfv2x, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-grmfpb-control-plane-gqpc5
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-grmfpb-control-plane-nfv2x
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-grmfpb-control-plane-nfv2x, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-grmfpb-control-plane-nfv2x
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-grmfpb-control-plane-gqpc5, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-node-2q4mk, container calico-node: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-cmdz7, container kube-proxy: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-grmfpb-control-plane-nfv2x, container kube-apiserver: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-grmfpb-control-plane-nfv2x, container kube-controller-manager: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-grmfpb-control-plane-nfv2x, container kube-scheduler: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-grmfpb-control-plane-nfv2x, container etcd: pods "node-drain-grmfpb-control-plane-nfv2x" not found
STEP: Fetching activity logs took 3.174308048s
STEP: Dumping all the Cluster API resources in the "node-drain-r6wfei" namespace
STEP: Deleting cluster node-drain-r6wfei/node-drain-grmfpb
STEP: Deleting cluster node-drain-grmfpb
INFO: Waiting for the Cluster node-drain-r6wfei/node-drain-grmfpb to be deleted
STEP: Waiting for cluster node-drain-grmfpb to be deleted
... skipping 78 lines ...
Jan 20 17:20:13.762: INFO: Collecting logs for Windows node md-scale-jgmhv in cluster md-scale-21o2xn in namespace md-scale-0gjmr2
Jan 20 17:22:47.518: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-jgmhv to /logs/artifacts/clusters/md-scale-21o2xn/machines/md-scale-21o2xn-md-win-b8d76694c-bcbkk/crashdumps.tar
Jan 20 17:22:49.307: INFO: Collecting boot logs for AzureMachine md-scale-21o2xn-md-win-jgmhv
Failed to get logs for machine md-scale-21o2xn-md-win-b8d76694c-bcbkk, cluster md-scale-0gjmr2/md-scale-21o2xn: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 20 17:22:50.105: INFO: Collecting logs for Windows node md-scale-hpcc7 in cluster md-scale-21o2xn in namespace md-scale-0gjmr2
Jan 20 17:25:25.208: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-hpcc7 to /logs/artifacts/clusters/md-scale-21o2xn/machines/md-scale-21o2xn-md-win-b8d76694c-wlmhm/crashdumps.tar
Jan 20 17:25:27.092: INFO: Collecting boot logs for AzureMachine md-scale-21o2xn-md-win-hpcc7
Failed to get logs for machine md-scale-21o2xn-md-win-b8d76694c-wlmhm, cluster md-scale-0gjmr2/md-scale-21o2xn: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-0gjmr2/md-scale-21o2xn kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-qztqh
STEP: Collecting events for Pod kube-system/calico-node-vsmxx
STEP: Creating log watcher for controller kube-system/calico-node-qztqh, container calico-node
STEP: Creating log watcher for controller kube-system/csi-proxy-vcnkp, container csi-proxy
STEP: Fetching kube-system pod logs took 414.068715ms
STEP: Dumping workload cluster md-scale-0gjmr2/md-scale-21o2xn Azure activity log
STEP: Collecting events for Pod kube-system/csi-proxy-vcnkp
STEP: Creating log watcher for controller kube-system/etcd-md-scale-21o2xn-control-plane-jx8xp, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-lsqlb
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-21o2xn-control-plane-jx8xp
STEP: Collecting events for Pod kube-system/kube-proxy-qw6h9
STEP: failed to find events of Pod "kube-scheduler-md-scale-21o2xn-control-plane-jx8xp"
STEP: Creating log watcher for controller kube-system/calico-node-vsmxx, container calico-node
STEP: Collecting events for Pod kube-system/etcd-md-scale-21o2xn-control-plane-jx8xp
STEP: failed to find events of Pod "etcd-md-scale-21o2xn-control-plane-jx8xp"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-21o2xn-control-plane-jx8xp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-r6wsl, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-lsqlb, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-dqkqg, container calico-node-startup
STEP: Creating log watcher for controller kube-system/containerd-logger-96fth, container containerd-logger
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-smqdf, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-21o2xn-control-plane-jx8xp, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-21o2xn-control-plane-jx8xp
STEP: failed to find events of Pod "kube-controller-manager-md-scale-21o2xn-control-plane-jx8xp"
STEP: Creating log watcher for controller kube-system/kube-proxy-qw6h9, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-proxy-bgl5v, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-r6wsl
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-89xc5, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-dqkqg, container calico-node-felix
STEP: Collecting events for Pod kube-system/csi-proxy-bgl5v
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-21o2xn-control-plane-jx8xp
STEP: Collecting events for Pod kube-system/containerd-logger-96fth
STEP: failed to find events of Pod "kube-apiserver-md-scale-21o2xn-control-plane-jx8xp"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-878mk, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-878mk
STEP: Collecting events for Pod kube-system/calico-node-windows-dqkqg
STEP: Creating log watcher for controller kube-system/calico-node-windows-jhhsb, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-jhhsb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ks5ns, container kube-proxy
... skipping 85 lines ...
Jan 20 17:26:42.167: INFO: Collecting logs for Windows node quick-sta-ngpck in cluster quick-start-cnnmjx in namespace quick-start-6iwkx7
Jan 20 17:29:14.934: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-ngpck to /logs/artifacts/clusters/quick-start-cnnmjx/machines/quick-start-cnnmjx-md-win-665555c65f-qmn57/crashdumps.tar
Jan 20 17:29:16.684: INFO: Collecting boot logs for AzureMachine quick-start-cnnmjx-md-win-ngpck
Failed to get logs for machine quick-start-cnnmjx-md-win-665555c65f-qmn57, cluster quick-start-6iwkx7/quick-start-cnnmjx: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 20 17:29:17.462: INFO: Collecting logs for Windows node quick-sta-fmhv2 in cluster quick-start-cnnmjx in namespace quick-start-6iwkx7
Jan 20 17:31:50.284: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-fmhv2 to /logs/artifacts/clusters/quick-start-cnnmjx/machines/quick-start-cnnmjx-md-win-665555c65f-zltpl/crashdumps.tar
Jan 20 17:31:51.962: INFO: Collecting boot logs for AzureMachine quick-start-cnnmjx-md-win-fmhv2
Failed to get logs for machine quick-start-cnnmjx-md-win-665555c65f-zltpl, cluster quick-start-6iwkx7/quick-start-cnnmjx: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-6iwkx7/quick-start-cnnmjx kube-system pod logs
STEP: Creating log watcher for controller kube-system/containerd-logger-4xh6q, container containerd-logger
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-pdhjh
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-h2m8w, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-ch2br
STEP: Collecting events for Pod kube-system/containerd-logger-4xh6q
... skipping 15 lines ...
STEP: Collecting events for Pod kube-system/containerd-logger-lgv5c
STEP: Creating log watcher for controller kube-system/kube-proxy-88vn6, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-qnl9z
STEP: Creating log watcher for controller kube-system/kube-proxy-tnzss, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-cnnmjx-control-plane-vkspd, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-cnnmjx-control-plane-vkspd
STEP: failed to find events of Pod "kube-scheduler-quick-start-cnnmjx-control-plane-vkspd"
STEP: Collecting events for Pod kube-system/kube-proxy-88vn6
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-rrzh4, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-tnzss
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qnl9z, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-dz74k
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-pdhjh, container coredns
STEP: Collecting events for Pod kube-system/etcd-quick-start-cnnmjx-control-plane-vkspd
STEP: failed to find events of Pod "etcd-quick-start-cnnmjx-control-plane-vkspd"
STEP: Collecting events for Pod kube-system/csi-proxy-cgw6d
STEP: Creating log watcher for controller kube-system/csi-proxy-cgw6d, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-84r6f
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-cnnmjx-control-plane-vkspd
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-cnnmjx-control-plane-vkspd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-cnnmjx-control-plane-vkspd, container kube-controller-manager
... skipping 88 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-dj4n8, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-477wd, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-kg6t4
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-vg6j2, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-bxsg6r-control-plane-9xz7k
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-bxsg6r-control-plane-9xz7k, container kube-apiserver
STEP: failed to find events of Pod "kube-apiserver-machine-pool-bxsg6r-control-plane-9xz7k"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-qhx8m, container coredns
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-bxsg6r-control-plane-9xz7k, container etcd
STEP: Fetching kube-system pod logs took 408.210519ms
STEP: Dumping workload cluster machine-pool-aaufnx/machine-pool-bxsg6r Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-bxsg6r-control-plane-9xz7k
STEP: Creating log watcher for controller kube-system/kube-proxy-r9fwx, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-477wd
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-bxsg6r-control-plane-9xz7k"
STEP: Creating log watcher for controller kube-system/kube-proxy-kg6t4, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-jh48h
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-lp9pz, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-dj4n8, container calico-node-felix
STEP: Collecting events for Pod kube-system/etcd-machine-pool-bxsg6r-control-plane-9xz7k
STEP: failed to find events of Pod "etcd-machine-pool-bxsg6r-control-plane-9xz7k"
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-bxsg6r-control-plane-9xz7k
STEP: failed to find events of Pod "kube-scheduler-machine-pool-bxsg6r-control-plane-9xz7k"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-vg6j2
STEP: Collecting events for Pod kube-system/kube-proxy-windows-lp9pz
STEP: Collecting events for Pod kube-system/kube-proxy-r9fwx
STEP: Creating log watcher for controller kube-system/calico-node-x8946, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-qhx8m
STEP: Collecting events for Pod kube-system/calico-node-x8946
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-bxsg6r-control-plane-9xz7k, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-bxsg6r-control-plane-9xz7k, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-windows-dj4n8
STEP: Error starting logs stream for pod kube-system/calico-node-windows-dj4n8, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-jh48h, container calico-node: pods "machine-pool-bxsg6r-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-lp9pz, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-dj4n8, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-r9fwx, container kube-proxy: pods "machine-pool-bxsg6r-mp-0000002" not found
STEP: Fetching activity logs took 1.651769732s
STEP: Dumping all the Cluster API resources in the "machine-pool-aaufnx" namespace
STEP: Deleting cluster machine-pool-aaufnx/machine-pool-bxsg6r
STEP: Deleting cluster machine-pool-bxsg6r
INFO: Waiting for the Cluster machine-pool-aaufnx/machine-pool-bxsg6r to be deleted
STEP: Waiting for cluster machine-pool-bxsg6r to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully exercise machine pools
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:159
Should successfully create a cluster with machine pool machines
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.9/e2e/machine_pool.go:77
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-20T21:00:37Z"}
++ early_exit_handler
++ '[' -n 166 ']'
++ kill -TERM 166
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-20T21:15:37Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-20T21:15:37Z"}