Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
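To reproduce a single spec from the list above, the capz-e2e suite can be focused on one spec name. A minimal sketch, assuming a checkout of sigs.k8s.io/cluster-api-provider-azure on the release-1.6 branch with Azure credentials exported in the environment (GINKGO_FOCUS is the suite's documented focus mechanism; the exact Makefile target may differ by branch):

  # Run only the self-hosted pivot spec from the list above.
  cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-azure"
  GINKGO_FOCUS="Should pivot the bootstrap cluster to a self-hosted cluster" \
    make test-e2e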
... skipping 596 lines ...
STEP: Fetching kube-system pod logs took 235.48796ms
STEP: Collecting events for Pod kube-system/calico-node-7js9z
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-pkeb3r-control-plane-lcqp7, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-pkeb3r-control-plane-lcqp7, container etcd
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-pkeb3r-control-plane-lcqp7
STEP: Creating log watcher for controller kube-system/calico-node-wtn2t, container calico-node
STEP: failed to find events of Pod "etcd-mhc-remediation-pkeb3r-control-plane-lcqp7"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-qnlhc
STEP: Collecting events for Pod kube-system/kube-proxy-kdrg2
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-25xxq
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-25xxq, container calico-kube-controllers
STEP: Dumping workload cluster mhc-remediation-o8lslg/mhc-remediation-pkeb3r Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-pkeb3r-control-plane-lcqp7
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-pkeb3r-control-plane-lcqp7
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-pkeb3r-control-plane-lcqp7"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-pkeb3r-control-plane-lcqp7, container kube-controller-manager
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-pkeb3r-control-plane-lcqp7"
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-pkeb3r-control-plane-lcqp7
STEP: Creating log watcher for controller kube-system/kube-proxy-kdrg2, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-pkeb3r-control-plane-lcqp7"
STEP: Creating log watcher for controller kube-system/kube-proxy-w6dxm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-w6dxm
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-pkeb3r-control-plane-lcqp7, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-qnlhc, container coredns
STEP: Collecting events for Pod kube-system/calico-node-wtn2t
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5vxbk, container coredns
... skipping 24 lines ...
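The repeated failed-to-find-events messages come from the dump step looking up Kubernetes events for static control-plane pods; the events have likely expired (the default kube-apiserver event TTL is one hour) or were never recorded, and the messages are informational rather than spec failures. A minimal sketch of the same lookup done by hand, with the pod name taken from the log (WORKLOAD_KUBECONFIG is a placeholder for the dumped workload-cluster kubeconfig path):

  # List events recorded for one of the pods above; an empty result is what
  # the collector reports as 'failed to find events of Pod'.
  kubectl --kubeconfig "$WORKLOAD_KUBECONFIG" -n kube-system get events \
    --field-selector involvedObject.name=etcd-mhc-remediation-pkeb3r-control-plane-lcqp7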
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Fri, 06 Jan 2023 21:14:56 UTC on Ginkgo node 7 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 6 21:14:56.757: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/06 21:14:56 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-6hj9do" using the "management" template (Kubernetes v1.23.15, 1 control-plane machine, 1 worker machine)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-6hj9do --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 66 lines ...
STEP: Collecting events for Pod kube-system/calico-node-dvrx7
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-zrzvv
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-bpddr
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-zrzvv, container coredns
STEP: Collecting events for Pod kube-system/etcd-self-hosted-6hj9do-control-plane-2l9bj
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-6hj9do-control-plane-2l9bj, container etcd
STEP: failed to find events of Pod "etcd-self-hosted-6hj9do-control-plane-2l9bj"
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-6hj9do-control-plane-2l9bj, container kube-apiserver
STEP: Fetching kube-system pod logs took 215.675252ms
STEP: Dumping workload cluster self-hosted/self-hosted-6hj9do Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-5mjvt, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-z246g, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-z7h8f
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-6hj9do-control-plane-2l9bj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-6hj9do-control-plane-2l9bj, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-z246g
STEP: Creating log watcher for controller kube-system/kube-proxy-z7h8f, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-6hj9do-control-plane-2l9bj
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-6hj9do-control-plane-2l9bj"
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-6hj9do-control-plane-2l9bj
STEP: failed to find events of Pod "kube-scheduler-self-hosted-6hj9do-control-plane-2l9bj"
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-6hj9do-control-plane-2l9bj
STEP: failed to find events of Pod "kube-apiserver-self-hosted-6hj9do-control-plane-2l9bj"
STEP: Collecting events for Pod kube-system/calico-node-5mjvt
STEP: Fetching activity logs took 1.848986498s
Jan 6 21:24:45.565: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
in the "self-hosted" namespace Jan 6 21:24:45.879: INFO: Deleting all clusters in the self-hosted namespace [1mSTEP[0m: Deleting cluster self-hosted-6hj9do INFO: Waiting for the Cluster self-hosted/self-hosted-6hj9do to be deleted [1mSTEP[0m: Waiting for cluster self-hosted-6hj9do to be deleted INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-86c9747485-z2ggp, container manager: http2: client connection lost INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-69b8f8fdd4-qvqbw, container manager: http2: client connection lost Jan 6 21:29:26.112: INFO: Deleting namespace used for hosting the "self-hosted" test spec INFO: Deleting namespace self-hosted Jan 6 21:29:26.148: INFO: Checking if any resources are left over in Azure for spec "self-hosted" [1mSTEP[0m: Redacting sensitive information from logs Jan 6 21:30:26.216: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec [1mSTEP[0m: Redacting sensitive information from logs ... skipping 204 lines ... Jan 6 21:24:09.136: INFO: Collecting logs for Windows node quick-sta-mrpm5 in cluster quick-start-yocos2 in namespace quick-start-7t8uqx Jan 6 21:26:43.478: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-mrpm5 to /logs/artifacts/clusters/quick-start-yocos2/machines/quick-start-yocos2-md-win-6df5bbf584-f8w7x/crashdumps.tar Jan 6 21:26:45.239: INFO: Collecting boot logs for AzureMachine quick-start-yocos2-md-win-mrpm5 Failed to get logs for machine quick-start-yocos2-md-win-6df5bbf584-f8w7x, cluster quick-start-7t8uqx/quick-start-yocos2: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] Jan 6 21:26:45.986: INFO: Collecting logs for Windows node quick-sta-bsrmn in cluster quick-start-yocos2 in namespace quick-start-7t8uqx Jan 6 21:29:19.357: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-bsrmn to /logs/artifacts/clusters/quick-start-yocos2/machines/quick-start-yocos2-md-win-6df5bbf584-n6fs6/crashdumps.tar Jan 6 21:29:21.092: INFO: Collecting boot logs for AzureMachine quick-start-yocos2-md-win-bsrmn Failed to get logs for machine quick-start-yocos2-md-win-6df5bbf584-n6fs6, cluster quick-start-7t8uqx/quick-start-yocos2: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1] [1mSTEP[0m: Dumping workload cluster quick-start-7t8uqx/quick-start-yocos2 kube-system pod logs [1mSTEP[0m: Collecting events for Pod kube-system/calico-node-windows-68kk4 [1mSTEP[0m: Collecting events for Pod kube-system/kube-scheduler-quick-start-yocos2-control-plane-458zp [1mSTEP[0m: Creating log watcher for controller kube-system/kube-proxy-windows-h4g9q, container kube-proxy [1mSTEP[0m: Creating log watcher for controller kube-system/kube-scheduler-quick-start-yocos2-control-plane-458zp, container kube-scheduler [1mSTEP[0m: failed to find events of Pod "kube-scheduler-quick-start-yocos2-control-plane-458zp" [1mSTEP[0m: Collecting events for Pod kube-system/kube-proxy-windows-9xcgk 
Jan 6 21:24:09.136: INFO: Collecting logs for Windows node quick-sta-mrpm5 in cluster quick-start-yocos2 in namespace quick-start-7t8uqx
Jan 6 21:26:43.478: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-mrpm5 to /logs/artifacts/clusters/quick-start-yocos2/machines/quick-start-yocos2-md-win-6df5bbf584-f8w7x/crashdumps.tar
Jan 6 21:26:45.239: INFO: Collecting boot logs for AzureMachine quick-start-yocos2-md-win-mrpm5
Failed to get logs for machine quick-start-yocos2-md-win-6df5bbf584-f8w7x, cluster quick-start-7t8uqx/quick-start-yocos2: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 6 21:26:45.986: INFO: Collecting logs for Windows node quick-sta-bsrmn in cluster quick-start-yocos2 in namespace quick-start-7t8uqx
Jan 6 21:29:19.357: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-bsrmn to /logs/artifacts/clusters/quick-start-yocos2/machines/quick-start-yocos2-md-win-6df5bbf584-n6fs6/crashdumps.tar
Jan 6 21:29:21.092: INFO: Collecting boot logs for AzureMachine quick-start-yocos2-md-win-bsrmn
Failed to get logs for machine quick-start-yocos2-md-win-6df5bbf584-n6fs6, cluster quick-start-7t8uqx/quick-start-yocos2: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-7t8uqx/quick-start-yocos2 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-windows-68kk4
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-yocos2-control-plane-458zp
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-h4g9q, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-yocos2-control-plane-458zp, container kube-scheduler
STEP: failed to find events of Pod "kube-scheduler-quick-start-yocos2-control-plane-458zp"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-9xcgk
STEP: Collecting events for Pod kube-system/kube-proxy-windows-h4g9q
STEP: Collecting events for Pod kube-system/calico-node-bc92c
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-c7jgq
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-c7jgq, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-njgbr
... skipping 14 lines ...
STEP: Creating log watcher for controller kube-system/etcd-quick-start-yocos2-control-plane-458zp, container etcd
STEP: Creating log watcher for controller kube-system/csi-proxy-4vvbk, container csi-proxy
STEP: Collecting events for Pod kube-system/etcd-quick-start-yocos2-control-plane-458zp
STEP: Collecting events for Pod kube-system/containerd-logger-78x8w
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-yocos2-control-plane-458zp, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-4gkrk
STEP: failed to find events of Pod "etcd-quick-start-yocos2-control-plane-458zp"
STEP: Creating log watcher for controller kube-system/containerd-logger-8tck5, container containerd-logger
STEP: Collecting events for Pod kube-system/csi-proxy-4vvbk
STEP: Collecting events for Pod kube-system/kube-proxy-kvhcg
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-yocos2-control-plane-458zp
STEP: failed to find events of Pod "kube-apiserver-quick-start-yocos2-control-plane-458zp"
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-yocos2-control-plane-458zp
STEP: failed to find events of Pod "kube-controller-manager-quick-start-yocos2-control-plane-458zp"
STEP: Creating log watcher for controller kube-system/kube-proxy-4gkrk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-kvhcg, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-tpsn5, container calico-node-felix
STEP: Fetching kube-system pod logs took 397.131325ms
STEP: Dumping workload cluster quick-start-7t8uqx/quick-start-yocos2 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9xcgk, container kube-proxy
... skipping 85 lines ...
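Pods like calico-node-windows above run more than one container (calico-node-startup and calico-node-felix), which is why each container gets its own log watcher. A minimal sketch of the equivalent manual stream for a single container, with the pod and container names taken from the log (WORKLOAD_KUBECONFIG is a placeholder):

  # Stream one container of a multi-container Windows CNI pod.
  kubectl --kubeconfig "$WORKLOAD_KUBECONFIG" -n kube-system \
    logs calico-node-windows-tpsn5 -c calico-node-felix --follow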
Jan 6 21:26:56.192: INFO: Collecting logs for Windows node md-scale-sfbdw in cluster md-scale-v1ab10 in namespace md-scale-t0kw3r
Jan 6 21:29:24.893: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-sfbdw to /logs/artifacts/clusters/md-scale-v1ab10/machines/md-scale-v1ab10-md-win-88f574568-n2cvm/crashdumps.tar
Jan 6 21:29:26.628: INFO: Collecting boot logs for AzureMachine md-scale-v1ab10-md-win-sfbdw
Failed to get logs for machine md-scale-v1ab10-md-win-88f574568-n2cvm, cluster md-scale-t0kw3r/md-scale-v1ab10: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 6 21:29:27.386: INFO: Collecting logs for Windows node md-scale-q44jh in cluster md-scale-v1ab10 in namespace md-scale-t0kw3r
Jan 6 21:32:00.130: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-q44jh to /logs/artifacts/clusters/md-scale-v1ab10/machines/md-scale-v1ab10-md-win-88f574568-tgz4v/crashdumps.tar
Jan 6 21:32:01.824: INFO: Collecting boot logs for AzureMachine md-scale-v1ab10-md-win-q44jh
Failed to get logs for machine md-scale-v1ab10-md-win-88f574568-tgz4v, cluster md-scale-t0kw3r/md-scale-v1ab10: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-t0kw3r/md-scale-v1ab10 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-gck6q
STEP: Creating log watcher for controller kube-system/calico-node-windows-58gq9, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-79499, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-v1ab10-control-plane-ck2r6
STEP: Collecting events for Pod kube-system/etcd-md-scale-v1ab10-control-plane-ck2r6
STEP: failed to find events of Pod "kube-scheduler-md-scale-v1ab10-control-plane-ck2r6"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-v1ab10-control-plane-ck2r6, container kube-apiserver
STEP: failed to find events of Pod "etcd-md-scale-v1ab10-control-plane-ck2r6"
STEP: Collecting events for Pod kube-system/containerd-logger-7jw8l
STEP: Creating log watcher for controller kube-system/containerd-logger-dbx6h, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-windows-58gq9, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-v1ab10-control-plane-ck2r6
STEP: failed to find events of Pod "kube-apiserver-md-scale-v1ab10-control-plane-ck2r6"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-v1ab10-control-plane-ck2r6, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-mll8m
STEP: Collecting events for Pod kube-system/calico-node-windows-58gq9
STEP: Collecting events for Pod kube-system/calico-node-4sk22
STEP: Creating log watcher for controller kube-system/calico-node-gck6q, container calico-node
STEP: Collecting events for Pod kube-system/containerd-logger-dbx6h
... skipping 5 lines ...
STEP: Fetching kube-system pod logs took 423.561831ms
STEP: Dumping workload cluster md-scale-t0kw3r/md-scale-v1ab10 Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-windows-7pqng
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-865fm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-v1ab10-control-plane-ck2r6
STEP: Creating log watcher for controller kube-system/etcd-md-scale-v1ab10-control-plane-ck2r6, container etcd
STEP: failed to find events of Pod "kube-controller-manager-md-scale-v1ab10-control-plane-ck2r6"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-2tkdf
STEP: Creating log watcher for controller kube-system/kube-proxy-5nlb7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-v1ab10-control-plane-ck2r6, container kube-scheduler
STEP: Collecting events for Pod kube-system/csi-proxy-6m6jx
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-mll8m, container coredns
STEP: Creating log watcher for controller kube-system/csi-proxy-c2mpq, container csi-proxy
... skipping 105 lines ...
STEP: Collecting events for Pod kube-system/etcd-machine-pool-u99t5v-control-plane-xcf6s
STEP: Dumping workload cluster machine-pool-3bgusn/machine-pool-u99t5v Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-tzdw8, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-4vd55
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-u99t5v-control-plane-xcf6s, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-s726p, container calico-node-startup
STEP: failed to find events of Pod "kube-scheduler-machine-pool-u99t5v-control-plane-xcf6s"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-u99t5v-control-plane-xcf6s, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-u99t5v-control-plane-xcf6s
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-u99t5v-control-plane-xcf6s"
STEP: Collecting events for Pod kube-system/kube-proxy-79df4
STEP: Creating log watcher for controller kube-system/kube-proxy-nv96c, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-79df4, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-nv96c
STEP: Collecting events for Pod kube-system/calico-node-windows-s726p
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-u99t5v-control-plane-xcf6s, container etcd
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-qvxw8
STEP: failed to find events of Pod "etcd-machine-pool-u99t5v-control-plane-xcf6s"
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-u99t5v-control-plane-xcf6s
STEP: failed to find events of Pod "kube-apiserver-machine-pool-u99t5v-control-plane-xcf6s"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-4vd55, container coredns
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-lnzl2, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-s726p, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-phgb9, container calico-node: pods "machine-pool-u99t5v-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-s726p, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-nv96c, container kube-proxy: pods "machine-pool-u99t5v-mp-0000002" not found
STEP: Fetching activity logs took 4.21624462s
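The "pods ... not found" stream errors line up with the machine-pool scale-in this spec exercises: nodes win-p-win000002 and machine-pool-u99t5v-mp-0000002 were deleted while the dump was still in flight, taking their DaemonSet pods with them. A minimal sketch for checking which kube-system pods still exist, and on which nodes, before streaming (WORKLOAD_KUBECONFIG is a placeholder):

  # List kube-system pods with their nodes; pods on scaled-in nodes disappear,
  # which is what the 'Error starting logs stream' messages reflect.
  kubectl --kubeconfig "$WORKLOAD_KUBECONFIG" -n kube-system get pods -o wide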
STEP: Dumping all the Cluster API resources in the "machine-pool-3bgusn" namespace
STEP: Deleting cluster machine-pool-3bgusn/machine-pool-u99t5v
STEP: Deleting cluster machine-pool-u99t5v
INFO: Waiting for the Cluster machine-pool-3bgusn/machine-pool-u99t5v to be deleted
STEP: Waiting for cluster machine-pool-u99t5v to be deleted
... skipping 67 lines ...
STEP: Dumping logs from the "node-drain-tn7895" workload cluster
STEP: Dumping workload cluster node-drain-dgud23/node-drain-tn7895 logs
Jan 6 21:32:24.815: INFO: Collecting logs for Linux node node-drain-tn7895-control-plane-z5lgk in cluster node-drain-tn7895 in namespace node-drain-dgud23
Jan 6 21:38:58.961: INFO: Collecting boot logs for AzureMachine node-drain-tn7895-control-plane-z5lgk
Failed to get logs for machine node-drain-tn7895-control-plane-fb524, cluster node-drain-dgud23/node-drain-tn7895: dialing public load balancer at node-drain-tn7895-cfdeed4.eastus.cloudapp.azure.com: dial tcp 20.253.12.44:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-dgud23/node-drain-tn7895 kube-system pod logs
STEP: Fetching kube-system pod logs took 370.984506ms
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-6nm7g
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-87cw7
STEP: Creating log watcher for controller kube-system/etcd-node-drain-tn7895-control-plane-z5lgk, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-tn7895-control-plane-z5lgk, container kube-apiserver
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.8/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-07T01:04:38Z"}
++ early_exit_handler
++ '[' -n 163 ']'
++ kill -TERM 163
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
All sensitive variables are redacted
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-07T01:19:38Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-07T01:19:38Z"}
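Two failures above are environmental rather than product regressions: the node-drain spec could not SSH to the workload cluster's public load balancer (dial tcp 20.253.12.44:22 timed out), and the job itself was killed by the Prow entrypoint's 4h0m0s timeout before it could finish. A minimal sketch for probing the SSH endpoint by hand, with the hostname taken from the node-drain log dump:

  # Probe the load balancer's inbound SSH port; a timeout here matches the
  # "connect: connection timed out" dial error in the node-drain log dump.
  nc -vz -w 10 node-drain-tn7895-cfdeed4.eastus.cloudapp.azure.com 22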