Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 591 lines ...
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-drkv6k-control-plane-fh64f, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-vjcsg, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-vjcsg
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-drkv6k-control-plane-fh64f, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-drkv6k-control-plane-fh64f, container kube-controller-manager
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-drkv6k-control-plane-fh64f
STEP: failed to find events of Pod "etcd-mhc-remediation-drkv6k-control-plane-fh64f"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-drkv6k-control-plane-fh64f, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-drkv6k-control-plane-fh64f
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-drkv6k-control-plane-fh64f"
STEP: Creating log watcher for controller kube-system/kube-proxy-rhddw, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-drkv6k-control-plane-fh64f
STEP: Collecting events for Pod kube-system/calico-node-mqxt6
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-dmchf
STEP: Creating log watcher for controller kube-system/calico-node-gm9d8, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-rhddw
... skipping 30 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sat, 07 Jan 2023 21:14:49 UTC on Ginkgo node 2 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 7 21:14:49.066: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/07 21:14:49 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-98absw" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-98absw --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 65 lines ...
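For reference, the template-generation step logged above corresponds roughly to the sketch below. The explicit azure provider flag, the output redirection, and the kubectl apply are assumptions filled in for illustration; the harness resolves the "(default)" provider and applies the generated manifest internally.

    # Render the workload cluster manifest from the "management" flavor
    # (clusterctl config cluster is the older spelling of clusterctl generate cluster,
    # still used by this test framework version).
    clusterctl config cluster self-hosted-98absw \
      --infrastructure azure \
      --kubernetes-version v1.23.15 \
      --control-plane-machine-count 1 \
      --worker-machine-count 1 \
      --flavor management > self-hosted-98absw.yaml

    # Create the cluster on the bootstrap/management cluster before the pivot.
    kubectl apply -f self-hosted-98absw.yaml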
STEP: Collecting events for Pod kube-system/calico-node-gn2wx
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-525kt
STEP: Collecting events for Pod kube-system/calico-node-ml8cp
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-7frvs, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-525kt, container coredns
STEP: Collecting events for Pod kube-system/etcd-self-hosted-98absw-control-plane-znr5k
STEP: failed to find events of Pod "etcd-self-hosted-98absw-control-plane-znr5k"
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-98absw-control-plane-znr5k, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-9xlxz
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-7frvs
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-98absw-control-plane-znr5k, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-98absw-control-plane-znr5k, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-98absw-control-plane-znr5k
STEP: Creating log watcher for controller kube-system/kube-proxy-9xlxz, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-98absw-control-plane-znr5k"
STEP: Creating log watcher for controller kube-system/kube-proxy-tthvv, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-98absw-control-plane-znr5k
STEP: failed to find events of Pod "kube-scheduler-self-hosted-98absw-control-plane-znr5k"
STEP: Collecting events for Pod kube-system/kube-proxy-tthvv
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-98absw-control-plane-znr5k
STEP: failed to find events of Pod "kube-apiserver-self-hosted-98absw-control-plane-znr5k"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-z9vc9, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-z9vc9
STEP: Creating log watcher for controller kube-system/calico-node-gn2wx, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-ml8cp, container calico-node
STEP: Fetching activity logs took 2.558736762s
Jan 7 21:23:58.230: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 7 21:23:58.673: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-98absw
INFO: Waiting for the Cluster self-hosted/self-hosted-98absw to be deleted
STEP: Waiting for cluster self-hosted-98absw to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-55668664dd-k7bp6, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-76db9b584f-hsgs8, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-69b8f8fdd4-kscnh, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-86f78ddd86-j2fjd, container manager: http2: client connection lost
Jan 7 21:28:38.943: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 7 21:28:38.977: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 7 21:29:26.689: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 204 lines ...
Jan 7 21:24:34.886: INFO: Collecting logs for Windows node quick-sta-95cn4 in cluster quick-start-z2gmix in namespace quick-start-9pchsq
Jan 7 21:27:13.793: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-95cn4 to /logs/artifacts/clusters/quick-start-z2gmix/machines/quick-start-z2gmix-md-win-77cd988cdb-gs5d9/crashdumps.tar
Jan 7 21:27:15.780: INFO: Collecting boot logs for AzureMachine quick-start-z2gmix-md-win-95cn4
Failed to get logs for machine quick-start-z2gmix-md-win-77cd988cdb-gs5d9, cluster quick-start-9pchsq/quick-start-z2gmix: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 7 21:27:16.498: INFO: Collecting logs for Windows node quick-sta-7lm2r in cluster quick-start-z2gmix in namespace quick-start-9pchsq
Jan 7 21:30:00.184: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-7lm2r to /logs/artifacts/clusters/quick-start-z2gmix/machines/quick-start-z2gmix-md-win-77cd988cdb-z45w2/crashdumps.tar
Jan 7 21:30:02.397: INFO: Collecting boot logs for AzureMachine quick-start-z2gmix-md-win-7lm2r
Failed to get logs for machine quick-start-z2gmix-md-win-77cd988cdb-z45w2, cluster quick-start-9pchsq/quick-start-z2gmix: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-9pchsq/quick-start-z2gmix kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-windows-r8hsn, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-z2gmix-control-plane-rnmrp
STEP: failed to find events of Pod "kube-scheduler-quick-start-z2gmix-control-plane-rnmrp"
STEP: Creating log watcher for controller kube-system/calico-node-windows-r8hsn, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-8zhcx, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-8zhcx
STEP: Collecting events for Pod kube-system/calico-node-windows-r8hsn
STEP: Collecting events for Pod kube-system/csi-proxy-dwql5
STEP: Fetching kube-system pod logs took 532.733804ms
... skipping 16 lines ...
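For reference, the two remote commands behind the repeated "Process exited with status 1" failures above are reconstructed in readable form below. The commands themselves are verbatim from the log; how they are invoked on the node is left abstract, since the harness uses its own remote runner. A nonzero exit here generally means the CNI log or crash dumps were not where the collector expected, not that the node was unreachable.

    # Read the CNI log from the Windows node; fails if C:\cni.log does not exist.
    Get-Content "C:\cni.log"

    # Archive any crash dumps so the harness can copy them into the job artifacts.
    $p = 'c:\localdumps'
    if (Test-Path $p) {
        tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_" }
    } else {
        Write-Host "No crash dumps found at $p"
    }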
STEP: Collecting events for Pod kube-system/containerd-logger-pnkdw
STEP: Collecting events for Pod kube-system/kube-proxy-954jk
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-d7m2d, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-8k9tf
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-z2gmix-control-plane-rnmrp
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-z2gmix-control-plane-rnmrp, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-quick-start-z2gmix-control-plane-rnmrp"
STEP: Collecting events for Pod kube-system/etcd-quick-start-z2gmix-control-plane-rnmrp
STEP: failed to find events of Pod "etcd-quick-start-z2gmix-control-plane-rnmrp"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-z2gmix-control-plane-rnmrp, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-z2gmix-control-plane-rnmrp
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-97szq, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tx9h6, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tx9h6
STEP: Creating log watcher for controller kube-system/csi-proxy-dwql5, container csi-proxy
... skipping 88 lines ...
STEP: Dumping workload cluster machine-pool-vzknfy/machine-pool-wd56ut kube-system pod logs
STEP: Collecting events for Pod kube-system/kube-proxy-windows-gm55x
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-44686, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-8g2rw, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-wd56ut-control-plane-s6jq4
STEP: Creating log watcher for controller kube-system/kube-proxy-s9tmf, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-wd56ut-control-plane-s6jq4"
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-wd56ut-control-plane-s6jq4, container etcd
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9lffp
STEP: Creating log watcher for controller kube-system/kube-proxy-5z5zw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-gm55x, container kube-proxy
STEP: Fetching kube-system pod logs took 447.152575ms
STEP: Dumping workload cluster machine-pool-vzknfy/machine-pool-wd56ut Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-s9tmf
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-2n5t9, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-5z5zw
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-wd56ut-control-plane-s6jq4, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-wd56ut-control-plane-s6jq4
STEP: failed to find events of Pod "kube-scheduler-machine-pool-wd56ut-control-plane-s6jq4"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-2n5t9
STEP: Creating log watcher for controller kube-system/calico-node-windows-8g2rw, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-wd56ut-control-plane-s6jq4, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-44686
STEP: Creating log watcher for controller kube-system/calico-node-hh62g, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-windows-8g2rw
STEP: Creating log watcher for controller kube-system/calico-node-j9pss, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-j9pss
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9lffp, container coredns
STEP: Collecting events for Pod kube-system/etcd-machine-pool-wd56ut-control-plane-s6jq4
STEP: failed to find events of Pod "etcd-machine-pool-wd56ut-control-plane-s6jq4"
STEP: Collecting events for Pod kube-system/calico-node-hh62g
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-wd56ut-control-plane-s6jq4
STEP: failed to find events of Pod "kube-apiserver-machine-pool-wd56ut-control-plane-s6jq4"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-wd56ut-control-plane-s6jq4, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/calico-node-windows-8g2rw, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-8g2rw, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-gm55x, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-5z5zw, container kube-proxy: pods "machine-pool-wd56ut-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-j9pss, container calico-node: pods "machine-pool-wd56ut-mp-0000002" not found
STEP: Fetching activity logs took 1.78161217s
STEP: Dumping all the Cluster API resources in the "machine-pool-vzknfy" namespace
STEP: Deleting cluster machine-pool-vzknfy/machine-pool-wd56ut
STEP: Deleting cluster machine-pool-wd56ut
INFO: Waiting for the Cluster machine-pool-vzknfy/machine-pool-wd56ut to be deleted
STEP: Waiting for cluster machine-pool-wd56ut to be deleted
... skipping 78 lines ...
Jan 7 21:27:57.769: INFO: Collecting logs for Windows node md-scale-lf795 in cluster md-scale-2d5o6c in namespace md-scale-rr13g7
Jan 7 21:30:34.787: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-lf795 to /logs/artifacts/clusters/md-scale-2d5o6c/machines/md-scale-2d5o6c-md-win-87ddf4d98-dhgwf/crashdumps.tar
Jan 7 21:30:36.675: INFO: Collecting boot logs for AzureMachine md-scale-2d5o6c-md-win-lf795
Failed to get logs for machine md-scale-2d5o6c-md-win-87ddf4d98-dhgwf, cluster md-scale-rr13g7/md-scale-2d5o6c: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 7 21:30:37.493: INFO: Collecting logs for Windows node md-scale-5r8ck in cluster md-scale-2d5o6c in namespace md-scale-rr13g7
Jan 7 21:33:16.880: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-5r8ck to /logs/artifacts/clusters/md-scale-2d5o6c/machines/md-scale-2d5o6c-md-win-87ddf4d98-gzdbf/crashdumps.tar
Jan 7 21:33:18.694: INFO: Collecting boot logs for AzureMachine md-scale-2d5o6c-md-win-5r8ck
Failed to get logs for machine md-scale-2d5o6c-md-win-87ddf4d98-gzdbf, cluster md-scale-rr13g7/md-scale-2d5o6c: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-rr13g7/md-scale-2d5o6c kube-system pod logs
STEP: Fetching kube-system pod logs took 439.167089ms
STEP: Collecting events for Pod kube-system/calico-node-ctjf5
STEP: Collecting events for Pod kube-system/containerd-logger-cpcqk
STEP: Collecting events for Pod kube-system/kube-proxy-8c9k4
STEP: Creating log watcher for controller kube-system/calico-node-windows-nj7f8, container calico-node-startup
... skipping 10 lines ...
STEP: Collecting events for Pod kube-system/calico-node-fk7jt
STEP: Creating log watcher for controller kube-system/calico-node-windows-bx7bg, container calico-node-startup
STEP: Collecting events for Pod kube-system/csi-proxy-cq8t4
STEP: Creating log watcher for controller kube-system/csi-proxy-q8grl, container csi-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jhzl6
STEP: Collecting events for Pod kube-system/etcd-md-scale-2d5o6c-control-plane-s79dn
STEP: failed to find events of Pod "etcd-md-scale-2d5o6c-control-plane-s79dn"
STEP: Creating log watcher for controller kube-system/calico-node-windows-bx7bg, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-fk7jt, container calico-node
STEP: Collecting events for Pod kube-system/csi-proxy-q8grl
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-sn2ll, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-7c9dv, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-windows-bx7bg
... skipping 3 lines ...
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-7c9dv
STEP: Dumping workload cluster md-scale-rr13g7/md-scale-2d5o6c Azure activity log
STEP: Collecting events for Pod kube-system/calico-node-windows-nj7f8
STEP: Creating log watcher for controller kube-system/containerd-logger-cpcqk, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-windows-nj7f8, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-2d5o6c-control-plane-s79dn
STEP: failed to find events of Pod "kube-scheduler-md-scale-2d5o6c-control-plane-s79dn"
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-2d5o6c-control-plane-s79dn
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-2d5o6c-control-plane-s79dn, container kube-apiserver
STEP: failed to find events of Pod "kube-apiserver-md-scale-2d5o6c-control-plane-s79dn"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-2d5o6c-control-plane-s79dn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-8c9k4, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-2d5o6c-control-plane-s79dn
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-shsgv
STEP: failed to find events of Pod "kube-controller-manager-md-scale-2d5o6c-control-plane-s79dn"
STEP: Fetching activity logs took 5.108834128s
STEP: Dumping all the Cluster API resources in the "md-scale-rr13g7" namespace
STEP: Deleting cluster md-scale-rr13g7/md-scale-2d5o6c
STEP: Deleting cluster md-scale-2d5o6c
INFO: Waiting for the Cluster md-scale-rr13g7/md-scale-2d5o6c to be deleted
STEP: Waiting for cluster md-scale-2d5o6c to be deleted
... skipping 67 lines ...
STEP: Dumping logs from the "node-drain-imi20z" workload cluster
STEP: Dumping workload cluster node-drain-mpjn35/node-drain-imi20z logs
Jan 7 21:32:18.879: INFO: Collecting logs for Linux node node-drain-imi20z-control-plane-9gtwn in cluster node-drain-imi20z in namespace node-drain-mpjn35
Jan 7 21:38:52.435: INFO: Collecting boot logs for AzureMachine node-drain-imi20z-control-plane-9gtwn
Failed to get logs for machine node-drain-imi20z-control-plane-42dqk, cluster node-drain-mpjn35/node-drain-imi20z: dialing public load balancer at node-drain-imi20z-f9fdf6f2.canadacentral.cloudapp.azure.com: dial tcp 20.175.217.100:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-mpjn35/node-drain-imi20z kube-system pod logs
STEP: Fetching kube-system pod logs took 388.047394ms
STEP: Dumping workload cluster node-drain-mpjn35/node-drain-imi20z Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-wlpk6, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-4cfcg, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-fr558, container calico-node
... skipping 30 lines ...
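For reference, the node-drain spec summarized next exercises Cluster API's nodeDrainTimeout: a Machine whose Node cannot finish draining is removed once the timeout expires. An illustrative (not job-verified) way to set it on a control plane is sketched below; the resource name is hypothetical, since the e2e framework configures this through the cluster template.

    # Force machine removal if a node drain hangs for more than 60 seconds.
    kubectl patch kubeadmcontrolplane node-drain-imi20z-control-plane \
      --type merge \
      -p '{"spec":{"machineTemplate":{"nodeDrainTimeout":"60s"}}}'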
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.8/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-08T01:05:30Z"}
++ early_exit_handler
++ '[' -n 157 ']'
++ kill -TERM 157
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-08T01:20:30Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-08T01:20:30Z"}