Result | FAILURE |
Tests | 0 failed / 7 succeeded |
Started | |
Elapsed | 4h15m |
Revision | release-1.5 |
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 580 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-uv3ozl-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/calico-node-m954m
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-8knt2
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-uv3ozl-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-8knt2, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-uv3ozl-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-uv3ozl-control-plane-0"
STEP: Creating log watcher for controller kube-system/calico-node-m954m, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-kq2dc, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-g89pk, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-uv3ozl-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-z7jg5, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-kq2dc
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-uv3ozl-control-plane-0, container kube-scheduler
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-g89pk
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-uv3ozl-control-plane-0
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-z7jg5
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-uv3ozl-control-plane-0
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-uv3ozl-control-plane-0
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-uv3ozl-control-plane-0"
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-uv3ozl-control-plane-0"
STEP: Fetching activity logs took 3.219787172s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-ed4dmm" namespace
STEP: Deleting cluster kcp-adoption-ed4dmm/kcp-adoption-uv3ozl
STEP: Deleting cluster kcp-adoption-uv3ozl
INFO: Waiting for the Cluster kcp-adoption-ed4dmm/kcp-adoption-uv3ozl to be deleted
STEP: Waiting for cluster kcp-adoption-uv3ozl to be deleted
... skipping 74 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4gksg, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-fefguo-control-plane-k6szn, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-fefguo-control-plane-k6szn
STEP: Creating log watcher for controller kube-system/calico-node-8xf7j, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-8xf7j
STEP: Dumping workload cluster mhc-remediation-c8q3xz/mhc-remediation-fefguo Azure activity log
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-fefguo-control-plane-k6szn"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-hq94h, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-hq94h
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-fefguo-control-plane-k6szn
STEP: Collecting events for Pod kube-system/kube-proxy-577b4
STEP: failed to find events of Pod "etcd-mhc-remediation-fefguo-control-plane-k6szn"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-982t2
STEP: Creating log watcher for controller kube-system/kube-proxy-cds7t, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-q857f, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-cds7t
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-fefguo-control-plane-k6szn, container kube-scheduler
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-q857f
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-fefguo-control-plane-k6szn
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-fefguo-control-plane-k6szn"
STEP: Creating log watcher for controller kube-system/kube-proxy-577b4, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-4gksg
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-fefguo-control-plane-k6szn, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-fefguo-control-plane-k6szn
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-fefguo-control-plane-k6szn"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-fefguo-control-plane-k6szn, container kube-controller-manager
STEP: Fetching activity logs took 992.240983ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-c8q3xz" namespace
STEP: Deleting cluster mhc-remediation-c8q3xz/mhc-remediation-fefguo
STEP: Deleting cluster mhc-remediation-fefguo
INFO: Waiting for the Cluster mhc-remediation-c8q3xz/mhc-remediation-fefguo to be deleted
... skipping 17 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Tue, 03 Jan 2023 17:03:36 UTC on Ginkgo node 10 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 3 17:03:36.433: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/03 17:03:36 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-9k9pch" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-9k9pch --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 66 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-5x5z4
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-rp6zf
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-9k9pch-control-plane-jm6cr, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-9k9pch-control-plane-jm6cr
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rp6zf, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-s5gl2, container calico-kube-controllers
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-9k9pch-control-plane-jm6cr"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-s5gl2
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-9k9pch-control-plane-jm6cr, container etcd
STEP: Collecting events for Pod kube-system/kube-proxy-8pgl9
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-9k9pch-control-plane-jm6cr
STEP: Creating log watcher for controller kube-system/kube-proxy-zltt9, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-5wfcq, container calico-node
STEP: failed to find events of Pod "kube-apiserver-self-hosted-9k9pch-control-plane-jm6cr"
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-9k9pch-control-plane-jm6cr, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-9k9pch-control-plane-jm6cr
STEP: Collecting events for Pod kube-system/etcd-self-hosted-9k9pch-control-plane-jm6cr
STEP: failed to find events of Pod "kube-scheduler-self-hosted-9k9pch-control-plane-jm6cr"
STEP: failed to find events of Pod "etcd-self-hosted-9k9pch-control-plane-jm6cr"
STEP: Collecting events for Pod kube-system/calico-node-6wp8k
STEP: Collecting events for Pod kube-system/kube-proxy-zltt9
STEP: Creating log watcher for controller kube-system/calico-node-6wp8k, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5x5z4, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-9k9pch-control-plane-jm6cr, container kube-apiserver
STEP: Fetching activity logs took 1.613791467s
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hnkl6, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-l7qxwg-control-plane-6hqg2, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-l7qxwg-control-plane-6hqg2
STEP: Collecting events for Pod kube-system/kube-proxy-windows-hnkl6
STEP: Creating log watcher for controller kube-system/calico-node-cgqf2, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-sgxmh, container calico-kube-controllers
STEP: failed to find events of Pod "kube-apiserver-machine-pool-l7qxwg-control-plane-6hqg2"
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-l7qxwg-control-plane-6hqg2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-l7qxwg-control-plane-6hqg2, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-l7qxwg-control-plane-6hqg2, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-mk8mt, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-l7qxwg-control-plane-6hqg2
STEP: Collecting events for Pod kube-system/calico-node-windows-mk8mt
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-l7qxwg-control-plane-6hqg2"
STEP: Dumping workload cluster machine-pool-pmxand/machine-pool-l7qxwg Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-l7qxwg-control-plane-6hqg2
STEP: Creating log watcher for controller kube-system/calico-node-windows-mk8mt, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-kbcm6, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5vbnf, container coredns
STEP: Collecting events for Pod kube-system/calico-node-kbcm6
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-5vbnf
STEP: Collecting events for Pod kube-system/kube-proxy-grl2s
STEP: Creating log watcher for controller kube-system/kube-proxy-grl2s, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-sgxmh
STEP: Creating log watcher for controller kube-system/kube-proxy-w6rwd, container kube-proxy
STEP: failed to find events of Pod "etcd-machine-pool-l7qxwg-control-plane-6hqg2"
STEP: Error starting logs stream for pod kube-system/calico-node-windows-mk8mt, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-w6rwd, container kube-proxy: pods "machine-pool-l7qxwg-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-mk8mt, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-hnkl6, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-kbcm6, container calico-node: pods "machine-pool-l7qxwg-mp-0000002" not found
STEP: Fetching activity logs took 2.166000489s
STEP: Dumping all the Cluster API resources in the "machine-pool-pmxand" namespace
STEP: Deleting cluster machine-pool-pmxand/machine-pool-l7qxwg
STEP: Deleting cluster machine-pool-l7qxwg
INFO: Waiting for the Cluster machine-pool-pmxand/machine-pool-l7qxwg to be deleted
STEP: Waiting for cluster machine-pool-l7qxwg to be deleted
... skipping 208 lines ...
Jan 3 17:13:06.282: INFO: Collecting logs for Windows node quick-sta-cmb5r in cluster quick-start-tx8i7f in namespace quick-start-a4pv5i
Jan 3 17:15:44.426: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-cmb5r to /logs/artifacts/clusters/quick-start-tx8i7f/machines/quick-start-tx8i7f-md-win-66f8d8475f-8tmr2/crashdumps.tar
Jan 3 17:15:46.913: INFO: Collecting boot logs for AzureMachine quick-start-tx8i7f-md-win-cmb5r
Failed to get logs for machine quick-start-tx8i7f-md-win-66f8d8475f-8tmr2, cluster quick-start-a4pv5i/quick-start-tx8i7f: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 3 17:15:47.882: INFO: Collecting logs for Windows node quick-sta-qt8w9 in cluster quick-start-tx8i7f in namespace quick-start-a4pv5i
Jan 3 17:18:33.030: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-qt8w9 to /logs/artifacts/clusters/quick-start-tx8i7f/machines/quick-start-tx8i7f-md-win-66f8d8475f-wqgtx/crashdumps.tar
Jan 3 17:18:35.460: INFO: Collecting boot logs for AzureMachine quick-start-tx8i7f-md-win-qt8w9
Failed to get logs for machine quick-start-tx8i7f-md-win-66f8d8475f-wqgtx, cluster quick-start-a4pv5i/quick-start-tx8i7f: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-a4pv5i/quick-start-tx8i7f kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-windows-qgzx8, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-r9xvv, container calico-node
STEP: Creating log watcher for controller kube-system/containerd-logger-btrt2, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-windows-fdl85, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-windows-fdl85
... skipping 17 lines ...
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-tx8i7f-control-plane-qjmmj
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-tx8i7f-control-plane-qjmmj
STEP: Collecting events for Pod kube-system/etcd-quick-start-tx8i7f-control-plane-qjmmj
STEP: Collecting events for Pod kube-system/kube-proxy-windows-hmtbc
STEP: Collecting events for Pod kube-system/csi-proxy-s59hx
STEP: Collecting events for Pod kube-system/kube-proxy-windows-pvqx5
STEP: failed to find events of Pod "etcd-quick-start-tx8i7f-control-plane-qjmmj"
STEP: failed to find events of Pod "kube-controller-manager-quick-start-tx8i7f-control-plane-qjmmj"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-tx8i7f-control-plane-qjmmj, container kube-apiserver
STEP: Fetching kube-system pod logs took 665.519777ms
STEP: Creating log watcher for controller kube-system/csi-proxy-s59hx, container csi-proxy
STEP: Creating log watcher for controller kube-system/etcd-quick-start-tx8i7f-control-plane-qjmmj, container etcd
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-tx8i7f-control-plane-qjmmj
STEP: failed to find events of Pod "kube-apiserver-quick-start-tx8i7f-control-plane-qjmmj"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-tx8i7f-control-plane-qjmmj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-j8fk6, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-j8fk6
STEP: Creating log watcher for controller kube-system/kube-proxy-thzv8, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-thzv8
STEP: failed to find events of Pod "kube-scheduler-quick-start-tx8i7f-control-plane-qjmmj"
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-pvqx5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-tx8i7f-control-plane-qjmmj, container kube-scheduler
STEP: Dumping workload cluster quick-start-a4pv5i/quick-start-tx8i7f Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-hmtbc, container kube-proxy
STEP: Fetching activity logs took 8.347931085s
STEP: Dumping all the Cluster API resources in the "quick-start-a4pv5i" namespace
... skipping 82 lines ...
Jan 3 17:16:35.383: INFO: Collecting logs for Windows node md-scale-hdjlh in cluster md-scale-vpx4t3 in namespace md-scale-bfam7m
Jan 3 17:19:16.534: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-hdjlh to /logs/artifacts/clusters/md-scale-vpx4t3/machines/md-scale-vpx4t3-md-win-db8f65f48-d229t/crashdumps.tar
Jan 3 17:19:18.961: INFO: Collecting boot logs for AzureMachine md-scale-vpx4t3-md-win-hdjlh
Failed to get logs for machine md-scale-vpx4t3-md-win-db8f65f48-d229t, cluster md-scale-bfam7m/md-scale-vpx4t3: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 3 17:19:20.147: INFO: Collecting logs for Windows node md-scale-nn28k in cluster md-scale-vpx4t3 in namespace md-scale-bfam7m
Jan 3 17:22:05.983: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-nn28k to /logs/artifacts/clusters/md-scale-vpx4t3/machines/md-scale-vpx4t3-md-win-db8f65f48-f6br2/crashdumps.tar
Jan 3 17:22:08.452: INFO: Collecting boot logs for AzureMachine md-scale-vpx4t3-md-win-nn28k
Failed to get logs for machine md-scale-vpx4t3-md-win-db8f65f48-f6br2, cluster md-scale-bfam7m/md-scale-vpx4t3: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-bfam7m/md-scale-vpx4t3 kube-system pod logs
STEP: Fetching kube-system pod logs took 680.488191ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-4jjgv, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-565ph, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nn2t6
STEP: Creating log watcher for controller kube-system/kube-proxy-zk5ss, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-565ph, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-zk5ss
STEP: Creating log watcher for controller kube-system/calico-node-windows-4jjgv, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-vpx4t3-control-plane-fskpl
STEP: Dumping workload cluster md-scale-bfam7m/md-scale-vpx4t3 Azure activity log
STEP: failed to find events of Pod "kube-scheduler-md-scale-vpx4t3-control-plane-fskpl"
STEP: Collecting events for Pod kube-system/containerd-logger-nfn82
STEP: Collecting events for Pod kube-system/calico-node-x495n
STEP: Collecting events for Pod kube-system/calico-node-windows-565ph
STEP: Creating log watcher for controller kube-system/containerd-logger-d6lfg, container containerd-logger
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-hlqvp, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-x495n, container calico-node
... skipping 10 lines ...
STEP: Collecting events for Pod kube-system/calico-node-windows-4jjgv
STEP: Collecting events for Pod kube-system/csi-proxy-zsxpt
STEP: Collecting events for Pod kube-system/calico-node-bc2bm
STEP: Creating log watcher for controller kube-system/etcd-md-scale-vpx4t3-control-plane-fskpl, container etcd
STEP: Collecting events for Pod kube-system/etcd-md-scale-vpx4t3-control-plane-fskpl
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-vpx4t3-control-plane-fskpl, container kube-apiserver
STEP: failed to find events of Pod "etcd-md-scale-vpx4t3-control-plane-fskpl"
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-vpx4t3-control-plane-fskpl
STEP: failed to find events of Pod "kube-apiserver-md-scale-vpx4t3-control-plane-fskpl"
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-vpx4t3-control-plane-fskpl
STEP: failed to find events of Pod "kube-controller-manager-md-scale-vpx4t3-control-plane-fskpl"
STEP: Creating log watcher for controller kube-system/kube-proxy-4z2wq, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-proxy-zsxpt, container csi-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-vpx4t3-control-plane-fskpl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7vgb9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-7vgb9
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nn2t6, container kube-proxy
... skipping 75 lines ...
STEP: Dumping logs from the "node-drain-14j7pr" workload cluster
STEP: Dumping workload cluster node-drain-pxlhv8/node-drain-14j7pr logs
Jan 3 17:21:19.575: INFO: Collecting logs for Linux node node-drain-14j7pr-control-plane-b5lwt in cluster node-drain-14j7pr in namespace node-drain-pxlhv8
Jan 3 17:27:54.020: INFO: Collecting boot logs for AzureMachine node-drain-14j7pr-control-plane-b5lwt
Failed to get logs for machine node-drain-14j7pr-control-plane-g7jsz, cluster node-drain-pxlhv8/node-drain-14j7pr: dialing public load balancer at node-drain-14j7pr-b701f97b.westus2.cloudapp.azure.com: dial tcp 40.64.106.230:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-pxlhv8/node-drain-14j7pr kube-system pod logs
STEP: Fetching kube-system pod logs took 591.722243ms
STEP: Dumping workload cluster node-drain-pxlhv8/node-drain-14j7pr Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-m2zrt, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-14j7pr-control-plane-b5lwt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-node-drain-14j7pr-control-plane-b5lwt, container etcd
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:195
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-03T20:54:54Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-03T21:09:54Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-03T21:09:54Z"}