Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.5
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 581 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-tbkkmc-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-tbkkmc-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-mn4tp, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-dk22p
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-tbkkmc-control-plane-0
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-tbkkmc-control-plane-0, container kube-scheduler
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-tbkkmc-control-plane-0"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-gvkvs
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-tbkkmc-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-tbkkmc-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-tbkkmc-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-tbkkmc-control-plane-0
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-tbkkmc-control-plane-0"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-mn4tp
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-tbkkmc-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-tbkkmc-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-dk22p, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-blm88, container calico-node
STEP: Fetching activity logs took 1.64191083s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-oqmo6z" namespace
STEP: Deleting cluster kcp-adoption-oqmo6z/kcp-adoption-tbkkmc
STEP: Deleting cluster kcp-adoption-tbkkmc
... skipping 75 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-z4mdt, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-veyg42-control-plane-zzs46
STEP: Dumping workload cluster mhc-remediation-tw78yu/mhc-remediation-veyg42 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-bfnvk, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-fnqqp
STEP: Creating log watcher for controller kube-system/calico-node-fnqqp, container calico-node
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-veyg42-control-plane-zzs46"
STEP: Collecting events for Pod kube-system/calico-node-z4mdt
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-c9q47, container coredns
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-bfnvk
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-c9q47
STEP: Creating log watcher for controller kube-system/kube-proxy-jtfpw, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-jtfpw
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-veyg42-control-plane-zzs46
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-veyg42-control-plane-zzs46, container kube-scheduler
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-veyg42-control-plane-zzs46"
STEP: Creating log watcher for controller kube-system/kube-proxy-dd5qx, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-veyg42-control-plane-zzs46, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-veyg42-control-plane-zzs46
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-veyg42-control-plane-zzs46"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rhjln, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-dd5qx
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-veyg42-control-plane-zzs46
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-rhjln
STEP: failed to find events of Pod "etcd-mhc-remediation-veyg42-control-plane-zzs46"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-veyg42-control-plane-zzs46, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-veyg42-control-plane-zzs46, container etcd
STEP: Fetching activity logs took 1.658167154s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tw78yu" namespace
STEP: Deleting cluster mhc-remediation-tw78yu/mhc-remediation-veyg42
STEP: Deleting cluster mhc-remediation-veyg42
... skipping 18 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sun, 25 Dec 2022 17:04:25 UTC on Ginkgo node 4 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 25 17:04:25.887: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/25 17:04:25 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-h6a9f5" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-h6a9f5 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 62 lines ...
STEP: Fetching kube-system pod logs took 672.195433ms
STEP: Dumping workload cluster self-hosted/self-hosted-h6a9f5 Azure activity log
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-h6a9f5-control-plane-t52jg
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-kjkvv
STEP: Creating log watcher for controller kube-system/calico-node-jghwx, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-jghwx
STEP: failed to find events of Pod "kube-apiserver-self-hosted-h6a9f5-control-plane-t52jg"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-h6a9f5-control-plane-t52jg, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-h6a9f5-control-plane-t52jg
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-h6a9f5-control-plane-t52jg"
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-h6a9f5-control-plane-t52jg, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-h6a9f5-control-plane-t52jg
STEP: Creating log watcher for controller kube-system/kube-proxy-pnnkv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-kjkvv, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-97z9w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-h6a9f5-control-plane-t52jg, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-pnnkv
STEP: Collecting events for Pod kube-system/kube-proxy-97z9w
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-cz5rd
STEP: Collecting events for Pod kube-system/etcd-self-hosted-h6a9f5-control-plane-t52jg
STEP: Collecting events for Pod kube-system/calico-node-kpncb
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-cz5rd, container coredns
STEP: failed to find events of Pod "etcd-self-hosted-h6a9f5-control-plane-t52jg"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-s2s8f
STEP: Creating log watcher for controller kube-system/calico-node-kpncb, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-s2s8f, container coredns
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-h6a9f5-control-plane-t52jg, container etcd
STEP: Fetching activity logs took 2.278070806s
Dec 25 17:15:32.602: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
... skipping 78 lines ...
Dec 25 17:13:15.445: INFO: Collecting logs for Windows node quick-sta-glkg4 in cluster quick-start-gc9mix in namespace quick-start-291pvn
Dec 25 17:15:54.026: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-glkg4 to /logs/artifacts/clusters/quick-start-gc9mix/machines/quick-start-gc9mix-md-win-79c85bf5cd-9dbnv/crashdumps.tar
Dec 25 17:15:56.986: INFO: Collecting boot logs for AzureMachine quick-start-gc9mix-md-win-glkg4
Failed to get logs for machine quick-start-gc9mix-md-win-79c85bf5cd-9dbnv, cluster quick-start-291pvn/quick-start-gc9mix: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 25 17:15:58.200: INFO: Collecting logs for Windows node quick-sta-f6jt9 in cluster quick-start-gc9mix in namespace quick-start-291pvn
Dec 25 17:18:34.880: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-f6jt9 to /logs/artifacts/clusters/quick-start-gc9mix/machines/quick-start-gc9mix-md-win-79c85bf5cd-n7r9v/crashdumps.tar
Dec 25 17:18:38.185: INFO: Collecting boot logs for AzureMachine quick-start-gc9mix-md-win-f6jt9
Failed to get logs for machine quick-start-gc9mix-md-win-79c85bf5cd-n7r9v, cluster quick-start-291pvn/quick-start-gc9mix: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-291pvn/quick-start-gc9mix kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-jtjsh
STEP: Fetching kube-system pod logs took 1.073531932s
STEP: Dumping workload cluster quick-start-291pvn/quick-start-gc9mix Azure activity log
STEP: Creating log watcher for controller kube-system/csi-proxy-cd6lf, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-cd6lf
... skipping 6 lines ...
STEP: Collecting events for Pod kube-system/kube-proxy-ptqnj
STEP: Creating log watcher for controller kube-system/calico-node-windows-s7v22, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-z566n
STEP: Creating log watcher for controller kube-system/calico-node-windows-xcnpj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-n2mm8, container calico-node
STEP: Collecting events for Pod kube-system/etcd-quick-start-gc9mix-control-plane-7pzh4
STEP: failed to find events of Pod "etcd-quick-start-gc9mix-control-plane-7pzh4"
STEP: Creating log watcher for controller kube-system/calico-node-windows-xcnpj, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-windows-s7v22
STEP: Creating log watcher for controller kube-system/containerd-logger-plfnd, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-windows-s7v22, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-windows-xcnpj
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-gc9mix-control-plane-7pzh4
STEP: failed to find events of Pod "kube-apiserver-quick-start-gc9mix-control-plane-7pzh4"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-gc9mix-control-plane-7pzh4, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-t2fzv
STEP: Creating log watcher for controller kube-system/kube-proxy-t2fzv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-gc9mix-control-plane-7pzh4, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-n2mm8
STEP: Collecting events for Pod kube-system/containerd-logger-plfnd
STEP: Creating log watcher for controller kube-system/containerd-logger-sxb2j, container containerd-logger
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-gc9mix-control-plane-7pzh4, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-gc9mix-control-plane-7pzh4
STEP: failed to find events of Pod "kube-scheduler-quick-start-gc9mix-control-plane-7pzh4"
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9bpjv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-dm6tm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-dm6tm
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-s8zht
STEP: Collecting events for Pod kube-system/kube-proxy-windows-9bpjv
STEP: Collecting events for Pod kube-system/containerd-logger-sxb2j
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-s8zht, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tjb8s
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-gc9mix-control-plane-7pzh4
STEP: failed to find events of Pod "kube-controller-manager-quick-start-gc9mix-control-plane-7pzh4"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tjb8s, container coredns
STEP: Fetching activity logs took 2.675177531s
STEP: Dumping all the Cluster API resources in the "quick-start-291pvn" namespace
STEP: Deleting cluster quick-start-291pvn/quick-start-gc9mix
STEP: Deleting cluster quick-start-gc9mix
INFO: Waiting for the Cluster quick-start-291pvn/quick-start-gc9mix to be deleted
... skipping 80 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-pxfk7, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-7mk4p1-control-plane-xdp6x, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-zhkjt
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-7mk4p1-control-plane-xdp6x
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-7mk4p1-control-plane-xdp6x
STEP: Collecting events for Pod kube-system/etcd-machine-pool-7mk4p1-control-plane-xdp6x
STEP: failed to find events of Pod "etcd-machine-pool-7mk4p1-control-plane-xdp6x"
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-7mk4p1-control-plane-xdp6x, container kube-apiserver
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-7mk4p1-control-plane-xdp6x"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-7mk4p1-control-plane-xdp6x, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-jdv42, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-m6jq9, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-pxfk7
STEP: Creating log watcher for controller kube-system/calico-node-windows-lj7dk, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-proxy-windows-k788s
STEP: Collecting events for Pod kube-system/kube-proxy-m6jq9
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-k788s, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-dk2m2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-7mk4p1-control-plane-xdp6x, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-windows-lj7dk
STEP: Creating log watcher for controller kube-system/calico-node-windows-lj7dk, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-7mk4p1-control-plane-xdp6x
STEP: failed to find events of Pod "kube-scheduler-machine-pool-7mk4p1-control-plane-xdp6x"
STEP: Dumping workload cluster machine-pool-cp4hdz/machine-pool-7mk4p1 Azure activity log
STEP: Collecting events for Pod kube-system/calico-node-dk2m2
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-c9zxp, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-c9zxp
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-v4f72, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-v4f72
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-zhkjt, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-jdv42
STEP: Error starting logs stream for pod kube-system/calico-node-pxfk7, container calico-node: pods "machine-pool-7mk4p1-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-k788s, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-lj7dk, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-m6jq9, container kube-proxy: pods "machine-pool-7mk4p1-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-lj7dk, container calico-node-startup: pods "win-p-win000002" not found
STEP: Fetching activity logs took 1.830941707s
STEP: Dumping all the Cluster API resources in the "machine-pool-cp4hdz" namespace
STEP: Deleting cluster machine-pool-cp4hdz/machine-pool-7mk4p1
STEP: Deleting cluster machine-pool-7mk4p1
INFO: Waiting for the Cluster machine-pool-cp4hdz/machine-pool-7mk4p1 to be deleted
STEP: Waiting for cluster machine-pool-7mk4p1 to be deleted
... skipping 78 lines ...
Dec 25 17:15:47.995: INFO: Collecting logs for Windows node md-scale-t8wn6 in cluster md-scale-7s3ixj in namespace md-scale-gyvxhi
Dec 25 17:18:29.433: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-t8wn6 to /logs/artifacts/clusters/md-scale-7s3ixj/machines/md-scale-7s3ixj-md-win-dfb68d49-prkfm/crashdumps.tar
Dec 25 17:18:33.161: INFO: Collecting boot logs for AzureMachine md-scale-7s3ixj-md-win-t8wn6
Failed to get logs for machine md-scale-7s3ixj-md-win-dfb68d49-prkfm, cluster md-scale-gyvxhi/md-scale-7s3ixj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 25 17:18:34.385: INFO: Collecting logs for Windows node md-scale-mhng8 in cluster md-scale-7s3ixj in namespace md-scale-gyvxhi
Dec 25 17:21:14.433: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-mhng8 to /logs/artifacts/clusters/md-scale-7s3ixj/machines/md-scale-7s3ixj-md-win-dfb68d49-r2vh9/crashdumps.tar
Dec 25 17:21:17.983: INFO: Collecting boot logs for AzureMachine md-scale-7s3ixj-md-win-mhng8
Failed to get logs for machine md-scale-7s3ixj-md-win-dfb68d49-r2vh9, cluster md-scale-gyvxhi/md-scale-7s3ixj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-gyvxhi/md-scale-7s3ixj kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-k84sm
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-7s3ixj-control-plane-z6mgj
STEP: Creating log watcher for controller kube-system/calico-node-windows-66mlm, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-2c59q, container calico-node
STEP: Fetching kube-system pod logs took 1.12506295s
... skipping 246 lines ...
STEP: Dumping logs from the "node-drain-eb739d" workload cluster
STEP: Dumping workload cluster node-drain-6k59yu/node-drain-eb739d logs
Dec 25 17:23:07.210: INFO: Collecting logs for Linux node node-drain-eb739d-control-plane-f746v in cluster node-drain-eb739d in namespace node-drain-6k59yu
Dec 25 17:29:42.441: INFO: Collecting boot logs for AzureMachine node-drain-eb739d-control-plane-f746v
Failed to get logs for machine node-drain-eb739d-control-plane-bkbpg, cluster node-drain-6k59yu/node-drain-eb739d: dialing public load balancer at node-drain-eb739d-70c2c3c6.northeurope.cloudapp.azure.com: dial tcp 20.166.155.162:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-6k59yu/node-drain-eb739d kube-system pod logs
STEP: Fetching kube-system pod logs took 1.017975001s
STEP: Dumping workload cluster node-drain-6k59yu/node-drain-eb739d Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-node-drain-eb739d-control-plane-f746v, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-eb739d-control-plane-f746v, container kube-controller-manager
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-fshd4
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  Should successfully set and use node drain timeout
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:195
    A node should be forcefully removed if it cannot be drained in time
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-12-25T20:54:31Z"}
++ early_exit_handler
++ '[' -n 163 ']'
++ kill -TERM 163
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-12-25T21:09:32Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-12-25T21:09:32Z"}