| Result   | FAILURE                |
| Tests    | 0 failed / 6 succeeded |
| Started  |                        |
| Elapsed  | 4h15m                  |
| Revision | release-1.6            |
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 599 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-72dj9, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-gv868
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-0r623x-control-plane-4tvc7
STEP: Collecting events for Pod kube-system/kube-proxy-fv76w
STEP: Creating log watcher for controller kube-system/kube-proxy-gv868, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "etcd-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-0r623x-control-plane-4tvc7, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-xp8f7
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-xp8f7, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-0r623x-control-plane-4tvc7, container etcd
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Fetching activity logs took 3.620888049s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-jkflk9" namespace
STEP: Deleting cluster mhc-remediation-jkflk9/mhc-remediation-0r623x
STEP: Deleting cluster mhc-remediation-0r623x
INFO: Waiting for the Cluster mhc-remediation-jkflk9/mhc-remediation-0r623x to be deleted
STEP: Waiting for cluster mhc-remediation-0r623x to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sun, 01 Jan 2023 21:11:36 UTC on Ginkgo node 4 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 1 21:11:36.194: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/01 21:11:36 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ujha7n" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ujha7n --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-rppb6, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-vgxds
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6, container kube-apiserver
STEP: Collecting events for Pod kube-system/etcd-self-hosted-ujha7n-control-plane-8rxh6
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6
STEP: failed to find events of Pod "kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6"
STEP: failed to find events of Pod "kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-rppb6
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-2ntff, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9q5nk, container coredns
STEP: Collecting events for Pod kube-system/calico-node-fqjj5
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-bsxnd, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9q5nk
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-bsxnd
STEP: Creating log watcher for controller kube-system/kube-proxy-cqrvt, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-cqrvt
STEP: Creating log watcher for controller kube-system/calico-node-fqjj5, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-ujha7n-control-plane-8rxh6
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-ujha7n-control-plane-8rxh6"
STEP: failed to find events of Pod "etcd-self-hosted-ujha7n-control-plane-8rxh6"
STEP: Fetching activity logs took 1.920472326s
Jan 1 21:22:14.483: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 1 21:22:14.873: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-ujha7n
INFO: Waiting for the Cluster self-hosted/self-hosted-ujha7n to be deleted
STEP: Waiting for cluster self-hosted-ujha7n to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8f6f78b8b-tj444, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-5b6d47468d-lpz2t, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-767ffc7f8-7khng, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66968bb4c5-m6dw6, container manager: http2: client connection lost
Jan 1 21:25:45.086: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 1 21:25:45.110: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 1 21:26:21.759: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 68 lines ...
Jan 1 21:20:06.075: INFO: Collecting logs for Windows node quick-sta-m9dgg in cluster quick-start-e437fi in namespace quick-start-4qu8yb
Jan 1 21:22:46.009: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-m9dgg to /logs/artifacts/clusters/quick-start-e437fi/machines/quick-start-e437fi-md-win-546d6cf75f-ckjtb/crashdumps.tar
Jan 1 21:22:47.774: INFO: Collecting boot logs for AzureMachine quick-start-e437fi-md-win-m9dgg
Failed to get logs for machine quick-start-e437fi-md-win-546d6cf75f-ckjtb, cluster quick-start-4qu8yb/quick-start-e437fi: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 1 21:22:48.557: INFO: Collecting logs for Windows node quick-sta-wqdhj in cluster quick-start-e437fi in namespace quick-start-4qu8yb
Jan 1 21:25:24.902: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-wqdhj to /logs/artifacts/clusters/quick-start-e437fi/machines/quick-start-e437fi-md-win-546d6cf75f-x9qr4/crashdumps.tar
Jan 1 21:25:26.769: INFO: Collecting boot logs for AzureMachine quick-start-e437fi-md-win-wqdhj
Failed to get logs for machine quick-start-e437fi-md-win-546d6cf75f-x9qr4, cluster quick-start-4qu8yb/quick-start-e437fi: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-4qu8yb/quick-start-e437fi kube-system pod logs
STEP: Collecting events for Pod kube-system/containerd-logger-29m6t
STEP: Creating log watcher for controller kube-system/csi-proxy-2k9c4, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-f7tts
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-nfdm6, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/csi-proxy-2k9c4
... skipping 21 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-rvm44, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-kn9gd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-lpd4p, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-kn9gd
STEP: Collecting events for Pod kube-system/etcd-quick-start-e437fi-control-plane-f6nrq
STEP: Collecting events for Pod kube-system/kube-proxy-windows-lpd4p
STEP: failed to find events of Pod "etcd-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-e437fi-control-plane-f6nrq, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-rvm44
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-e437fi-control-plane-f6nrq, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-e437fi-control-plane-f6nrq
STEP: Collecting events for Pod kube-system/kube-proxy-kdv6f
STEP: failed to find events of Pod "kube-scheduler-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/calico-node-windows-zlrmk, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-kdv6f, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-e437fi-control-plane-f6nrq
STEP: failed to find events of Pod "kube-controller-manager-quick-start-e437fi-control-plane-f6nrq"
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-e437fi-control-plane-f6nrq
STEP: failed to find events of Pod "kube-apiserver-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-e437fi-control-plane-f6nrq, container kube-apiserver
STEP: Fetching activity logs took 1.373047753s
STEP: Dumping all the Cluster API resources in the "quick-start-4qu8yb" namespace
STEP: Deleting cluster quick-start-4qu8yb/quick-start-e437fi
STEP: Deleting cluster quick-start-e437fi
INFO: Waiting for the Cluster quick-start-4qu8yb/quick-start-e437fi to be deleted
... skipping 93 lines ...
STEP: Dumping workload cluster machine-pool-rps1ce/machine-pool-np2dke Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-shn7z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-np2dke-control-plane-vfb64, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-4gv4n
STEP: Creating log watcher for controller kube-system/calico-node-4gv4n, container calico-node
STEP: Collecting events for Pod kube-system/etcd-machine-pool-np2dke-control-plane-vfb64
STEP: failed to find events of Pod "etcd-machine-pool-np2dke-control-plane-vfb64"
STEP: Creating log watcher for controller kube-system/kube-proxy-wc5sn, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-fbnn7
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-np2dke-control-plane-vfb64, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-np2dke-control-plane-vfb64
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-np2dke-control-plane-vfb64, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nx6cs, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-wc5sn
STEP: failed to find events of Pod "kube-apiserver-machine-pool-np2dke-control-plane-vfb64"
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-np2dke-control-plane-vfb64
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-np2dke-control-plane-vfb64"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9js62, container coredns
STEP: failed to find events of Pod "kube-scheduler-machine-pool-np2dke-control-plane-vfb64"
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-nx6cs, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-wc5sn, container kube-proxy: pods "machine-pool-np2dke-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-mpwv8, container calico-node: pods "machine-pool-np2dke-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-khttq, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-khttq, container calico-node-startup: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.719451093s
STEP: Dumping all the Cluster API resources in the "machine-pool-rps1ce" namespace
STEP: Deleting cluster machine-pool-rps1ce/machine-pool-np2dke
STEP: Deleting cluster machine-pool-np2dke
INFO: Waiting for the Cluster machine-pool-rps1ce/machine-pool-np2dke to be deleted
STEP: Waiting for cluster machine-pool-np2dke to be deleted
... skipping 214 lines ...
Jan 1 21:23:28.064: INFO: Collecting logs for Windows node md-scale-dccn5 in cluster md-scale-7s6awa in namespace md-scale-sk78xz
Jan 1 21:26:07.930: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-dccn5 to /logs/artifacts/clusters/md-scale-7s6awa/machines/md-scale-7s6awa-md-win-68d6d6c44d-9z8mc/crashdumps.tar
Jan 1 21:26:09.686: INFO: Collecting boot logs for AzureMachine md-scale-7s6awa-md-win-dccn5
Failed to get logs for machine md-scale-7s6awa-md-win-68d6d6c44d-9z8mc, cluster md-scale-sk78xz/md-scale-7s6awa: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 1 21:26:10.611: INFO: Collecting logs for Windows node md-scale-tg2ch in cluster md-scale-7s6awa in namespace md-scale-sk78xz
Jan 1 21:28:49.181: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-tg2ch to /logs/artifacts/clusters/md-scale-7s6awa/machines/md-scale-7s6awa-md-win-68d6d6c44d-vk87m/crashdumps.tar
Jan 1 21:28:50.953: INFO: Collecting boot logs for AzureMachine md-scale-7s6awa-md-win-tg2ch
Failed to get logs for machine md-scale-7s6awa-md-win-68d6d6c44d-vk87m, cluster md-scale-sk78xz/md-scale-7s6awa: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-sk78xz/md-scale-7s6awa kube-system pod logs
STEP: Fetching kube-system pod logs took 410.07298ms
STEP: Dumping workload cluster md-scale-sk78xz/md-scale-7s6awa Azure activity log
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-8nbzt
STEP: Creating log watcher for controller kube-system/calico-node-tx2j7, container calico-node
STEP: Collecting events for Pod kube-system/containerd-logger-7hx2m
STEP: Creating log watcher for controller kube-system/containerd-logger-jx7bq, container containerd-logger
STEP: Collecting events for Pod kube-system/calico-node-tx2j7
STEP: Creating log watcher for controller kube-system/calico-node-windows-h9ln2, container calico-node-startup
STEP: Collecting events for Pod kube-system/containerd-logger-jx7bq
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-k5c5z, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-h9ln2, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-7s6awa-control-plane-qtc74
STEP: failed to find events of Pod "kube-controller-manager-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-k5c5z
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tqjw8, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-4t9ss, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-h9ln2
STEP: Creating log watcher for controller kube-system/calico-node-windows-z8fvz, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wldzd, container kube-proxy
... skipping 10 lines ...
STEP: Collecting events for Pod kube-system/calico-node-windows-z8fvz
STEP: Creating log watcher for controller kube-system/calico-node-xdk9q, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tqjw8
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-8nbzt, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-7s6awa-control-plane-qtc74, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-xdk9q
STEP: failed to find events of Pod "kube-scheduler-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-wldzd
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-7s6awa-control-plane-qtc74, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-l4tn6
STEP: Collecting events for Pod kube-system/etcd-md-scale-7s6awa-control-plane-qtc74
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-bhjpb, container kube-proxy
STEP: failed to find events of Pod "etcd-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-7s6awa-control-plane-qtc74
STEP: failed to find events of Pod "kube-apiserver-md-scale-7s6awa-control-plane-qtc74"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-7s6awa-control-plane-qtc74, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-windows-bhjpb
STEP: Fetching activity logs took 4.602254618s
STEP: Dumping all the Cluster API resources in the "md-scale-sk78xz" namespace
STEP: Deleting cluster md-scale-sk78xz/md-scale-7s6awa
STEP: Deleting cluster md-scale-7s6awa
... skipping 69 lines ...
STEP: Dumping logs from the "node-drain-v6gph9" workload cluster
STEP: Dumping workload cluster node-drain-frprye/node-drain-v6gph9 logs
Jan 1 21:26:55.370: INFO: Collecting logs for Linux node node-drain-v6gph9-control-plane-ghxzq in cluster node-drain-v6gph9 in namespace node-drain-frprye
Jan 1 21:33:29.734: INFO: Collecting boot logs for AzureMachine node-drain-v6gph9-control-plane-ghxzq
Failed to get logs for machine node-drain-v6gph9-control-plane-nsjfz, cluster node-drain-frprye/node-drain-v6gph9: dialing public load balancer at node-drain-v6gph9-5b4a24b5.canadacentral.cloudapp.azure.com: dial tcp 20.175.153.160:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-frprye/node-drain-v6gph9 kube-system pod logs
STEP: Fetching kube-system pod logs took 403.93448ms
STEP: Creating log watcher for controller kube-system/etcd-node-drain-v6gph9-control-plane-ghxzq, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-v6gph9-control-plane-ghxzq, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-ffx2w
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-jklc2, container calico-kube-controllers
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-02T01:02:51Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-02T01:17:51Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-02T01:17:51Z"}