Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.5
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 578 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-r798t
STEP: Collecting events for Pod kube-system/calico-node-fjpxg
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-kbb49, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-8d4xc
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-r798t, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-2qrr8w-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-2qrr8w-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-ntw7h, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-2qrr8w-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-kbb49
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-2qrr8w-control-plane-0
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-2qrr8w-control-plane-0, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-ntw7h
STEP: Creating log watcher for controller kube-system/calico-node-fjpxg, container calico-node
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-2qrr8w-control-plane-0"
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-2qrr8w-control-plane-0
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-8d4xc, container coredns
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-2qrr8w-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-2qrr8w-control-plane-0, container kube-apiserver
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-2qrr8w-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-2qrr8w-control-plane-0"
STEP: Fetching activity logs took 1.296353533s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-wamxht" namespace
STEP: Deleting cluster kcp-adoption-wamxht/kcp-adoption-2qrr8w
STEP: Deleting cluster kcp-adoption-2qrr8w
INFO: Waiting for the Cluster kcp-adoption-wamxht/kcp-adoption-2qrr8w to be deleted
STEP: Waiting for cluster kcp-adoption-2qrr8w to be deleted
... skipping 75 lines ...
STEP: Dumping workload cluster mhc-remediation-m73onr/mhc-remediation-6eshny Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-qh5qq, container coredns
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-6eshny-control-plane-62v4r, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-wfw9z, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-wfw9z
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-6eshny-control-plane-62v4r
STEP: failed to find events of Pod "etcd-mhc-remediation-6eshny-control-plane-62v4r"
STEP: Creating log watcher for controller kube-system/calico-node-g9nld, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-6eshny-control-plane-62v4r, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-6eshny-control-plane-62v4r
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-6eshny-control-plane-62v4r, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-6eshny-control-plane-62v4r"
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-6eshny-control-plane-62v4r
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-6eshny-control-plane-62v4r"
STEP: Creating log watcher for controller kube-system/kube-proxy-h4lbg, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-h4lbg
STEP: Creating log watcher for controller kube-system/kube-proxy-k562t, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-k562t
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-6eshny-control-plane-62v4r, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-6eshny-control-plane-62v4r
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-6eshny-control-plane-62v4r"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-dqxtt
STEP: Creating log watcher for controller kube-system/calico-node-pz6zv, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-pz6zv
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-dqxtt, container coredns
STEP: Fetching activity logs took 1.755026009s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-m73onr" namespace
... skipping 20 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sat, 31 Dec 2022 17:03:20 UTC on Ginkgo node 8 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 31 17:03:20.501: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/31 17:03:20 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-vdgq4m" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-vdgq4m --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 63 lines ...
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-2k2qn
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tckzz, container coredns
STEP: Collecting events for Pod kube-system/etcd-self-hosted-vdgq4m-control-plane-rbfzd
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-vdgq4m-control-plane-rbfzd, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-vdgq4m-control-plane-rbfzd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-8fb2x, container calico-kube-controllers
STEP: failed to find events of Pod "etcd-self-hosted-vdgq4m-control-plane-rbfzd"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tckzz
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-vdgq4m-control-plane-rbfzd, container etcd
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-vdgq4m-control-plane-rbfzd"
STEP: Collecting events for Pod kube-system/kube-proxy-kspc4
STEP: Creating log watcher for controller kube-system/kube-proxy-kspc4, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-vdgq4m-control-plane-rbfzd, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-8fb2x
STEP: Fetching kube-system pod logs took 260.488078ms
STEP: Dumping workload cluster self-hosted/self-hosted-vdgq4m Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-gtrh5, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-vdgq4m-control-plane-rbfzd
STEP: failed to find events of Pod "kube-apiserver-self-hosted-vdgq4m-control-plane-rbfzd"
STEP: Collecting events for Pod kube-system/kube-proxy-gtrh5
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-vdgq4m-control-plane-rbfzd, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-vdgq4m-control-plane-rbfzd
STEP: Collecting events for Pod kube-system/calico-node-cqf2k
STEP: failed to find events of Pod "kube-scheduler-self-hosted-vdgq4m-control-plane-rbfzd"
STEP: Creating log watcher for controller kube-system/calico-node-j7h8t, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-j7h8t
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-2k2qn, container coredns
STEP: Fetching activity logs took 1.746733562s
Dec 31 17:12:17.741: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Dec 31 17:12:18.076: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-vdgq4m
INFO: Waiting for the Cluster self-hosted/self-hosted-vdgq4m to be deleted
STEP: Waiting for cluster self-hosted-vdgq4m to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6c76c59d6b-44gd8, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-6cf5494777-hwzf6, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-74b6b6b77f-scvmx, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-7df9bc44b4-zdxbm, container manager: http2: client connection lost
Dec 31 17:16:48.270: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Dec 31 17:16:48.291: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Dec 31 17:17:20.366: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 220 lines ...
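For context, the "INFO: clusterctl config cluster ..." line in the self-hosted spec above is the template-rendering step: clusterctl expands the "management" flavor template into the workload cluster manifest before it is applied. A rough local equivalent is sketched below; the explicit "azure" provider name and the output redirect are assumptions for illustration, since the log's "(default)" placeholder elides the provider actually configured in CI:

    # Sketch: render the self-hosted workload cluster manifest locally.
    # --infrastructure azure is an assumption; the CI run used its configured default.
    clusterctl config cluster self-hosted-vdgq4m \
        --infrastructure azure \
        --kubernetes-version v1.23.15 \
        --control-plane-machine-count 1 \
        --worker-machine-count 1 \
        --flavor management > self-hosted-vdgq4m.yaml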
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-kq5dw
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-f755ok-control-plane-9nxn4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-82bpm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-lwj5j, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-f755ok-control-plane-9nxn4, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-f755ok-control-plane-9nxn4
STEP: failed to find events of Pod "kube-apiserver-machine-pool-f755ok-control-plane-9nxn4"
STEP: Creating log watcher for controller kube-system/kube-proxy-7m5s7, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-machine-pool-f755ok-control-plane-9nxn4
STEP: failed to find events of Pod "etcd-machine-pool-f755ok-control-plane-9nxn4"
STEP: Collecting events for Pod kube-system/kube-proxy-7m5s7
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-f755ok-control-plane-9nxn4
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-f755ok-control-plane-9nxn4"
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wd76t, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-wd76t
STEP: Collecting events for Pod kube-system/kube-proxy-lwj5j
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-f755ok-control-plane-9nxn4, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-windows-7gcgf
STEP: Collecting events for Pod kube-system/calico-node-kj6fg
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-f755ok-control-plane-9nxn4
STEP: Fetching kube-system pod logs took 411.945067ms
STEP: failed to find events of Pod "kube-scheduler-machine-pool-f755ok-control-plane-9nxn4"
STEP: Dumping workload cluster machine-pool-nnn9lf/machine-pool-f755ok Azure activity log
STEP: Error starting logs stream for pod kube-system/calico-node-windows-7gcgf, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-lwj5j, container kube-proxy: pods "machine-pool-f755ok-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-7gcgf, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-wd76t, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-kj6fg, container calico-node: pods "machine-pool-f755ok-mp-0000002" not found
STEP: Fetching activity logs took 1.669326505s
STEP: Dumping all the Cluster API resources in the "machine-pool-nnn9lf" namespace
STEP: Deleting cluster machine-pool-nnn9lf/machine-pool-f755ok
STEP: Deleting cluster machine-pool-f755ok
INFO: Waiting for the Cluster machine-pool-nnn9lf/machine-pool-f755ok to be deleted
STEP: Waiting for cluster machine-pool-f755ok to be deleted
... skipping 72 lines ...
Dec 31 17:12:24.356: INFO: Collecting logs for Windows node quick-sta-kdc4x in cluster quick-start-uc605f in namespace quick-start-9jd2i4
Dec 31 17:15:03.506: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-kdc4x to /logs/artifacts/clusters/quick-start-uc605f/machines/quick-start-uc605f-md-win-8d66c9854-67wsm/crashdumps.tar
Dec 31 17:15:05.373: INFO: Collecting boot logs for AzureMachine quick-start-uc605f-md-win-kdc4x
Failed to get logs for machine quick-start-uc605f-md-win-8d66c9854-67wsm, cluster quick-start-9jd2i4/quick-start-uc605f: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 31 17:15:06.189: INFO: Collecting logs for Windows node quick-sta-mlspb in cluster quick-start-uc605f in namespace quick-start-9jd2i4
Dec 31 17:17:44.828: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-mlspb to /logs/artifacts/clusters/quick-start-uc605f/machines/quick-start-uc605f-md-win-8d66c9854-xxkkt/crashdumps.tar
Dec 31 17:17:46.905: INFO: Collecting boot logs for AzureMachine quick-start-uc605f-md-win-mlspb
Failed to get logs for machine quick-start-uc605f-md-win-8d66c9854-xxkkt, cluster quick-start-9jd2i4/quick-start-uc605f: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-9jd2i4/quick-start-uc605f kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-windows-mnp2x, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-uc605f-control-plane-844wx, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-windows-mnp2x
STEP: Collecting events for Pod kube-system/calico-node-prjj9
STEP: Collecting events for Pod kube-system/containerd-logger-t7bxt
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-sjcr7, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-7kq7m
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-l847t, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-v69fj, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-vvm5r, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-uc605f-control-plane-844wx
STEP: failed to find events of Pod "kube-apiserver-quick-start-uc605f-control-plane-844wx"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-uc605f-control-plane-844wx, container kube-controller-manager
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-sjcr7
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-l847t
STEP: Creating log watcher for controller kube-system/calico-node-prjj9, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-mnp2x, container calico-node-startup
STEP: Fetching kube-system pod logs took 539.598789ms
STEP: Dumping workload cluster quick-start-9jd2i4/quick-start-uc605f Azure activity log
STEP: Collecting events for Pod kube-system/calico-node-v69fj
STEP: Collecting events for Pod kube-system/calico-node-windows-vvm5r
STEP: Creating log watcher for controller kube-system/calico-node-windows-vvm5r, container calico-node-felix
STEP: Creating log watcher for controller kube-system/containerd-logger-fxbq6, container containerd-logger
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-uc605f-control-plane-844wx
STEP: failed to find events of Pod "kube-controller-manager-quick-start-uc605f-control-plane-844wx"
STEP: Creating log watcher for controller kube-system/kube-proxy-m4qp9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-uc605f-control-plane-844wx
STEP: Collecting events for Pod kube-system/kube-proxy-windows-lzlb4
STEP: Collecting events for Pod kube-system/csi-proxy-hdtkr
STEP: Creating log watcher for controller kube-system/csi-proxy-qcg78, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-m4qp9
... skipping 95 lines ...
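The "Process exited with status 1" failures in the Windows log-collection steps above come from two commands run remotely on each Windows node: one reads C:\cni.log, the other tars any crash dumps under c:\localdumps for upload as crashdumps.tar. Unescaped for readability, the quoted commands are the PowerShell below, reformatted from the failure messages rather than written anew; the log records only the nonzero exit statuses, not the underlying cause:

    # Reformatted from the "running command ..." failure messages above.
    Get-Content "C:\cni.log"

    $p = 'c:\localdumps'
    if (Test-Path $p) {
        tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_" }
    } else {
        Write-Host "No crash dumps found at $p"
    }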
Dec 31 17:15:42.196: INFO: Collecting logs for Windows node md-scale-226lg in cluster md-scale-vc6mat in namespace md-scale-dwgi8f
Dec 31 17:18:20.266: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-226lg to /logs/artifacts/clusters/md-scale-vc6mat/machines/md-scale-vc6mat-md-win-54d7bfccd8-klhp4/crashdumps.tar
Dec 31 17:18:22.175: INFO: Collecting boot logs for AzureMachine md-scale-vc6mat-md-win-226lg
Failed to get logs for machine md-scale-vc6mat-md-win-54d7bfccd8-klhp4, cluster md-scale-dwgi8f/md-scale-vc6mat: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 31 17:18:23.147: INFO: Collecting logs for Windows node md-scale-9rxb2 in cluster md-scale-vc6mat in namespace md-scale-dwgi8f
Dec 31 17:21:02.976: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-9rxb2 to /logs/artifacts/clusters/md-scale-vc6mat/machines/md-scale-vc6mat-md-win-54d7bfccd8-sdm97/crashdumps.tar
Dec 31 17:21:04.848: INFO: Collecting boot logs for AzureMachine md-scale-vc6mat-md-win-9rxb2
Failed to get logs for machine md-scale-vc6mat-md-win-54d7bfccd8-sdm97, cluster md-scale-dwgi8f/md-scale-vc6mat: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-dwgi8f/md-scale-vc6mat kube-system pod logs
STEP: Fetching kube-system pod logs took 401.067559ms
STEP: Creating log watcher for controller kube-system/calico-node-mlq7x, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-2gtt9
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-v7wg8
STEP: Creating log watcher for controller kube-system/calico-node-windows-dkj7q, container calico-node-startup
... skipping 16 lines ...
STEP: Collecting events for Pod kube-system/calico-node-mlq7x
STEP: Collecting events for Pod kube-system/calico-node-8nrvm
STEP: Creating log watcher for controller kube-system/etcd-md-scale-vc6mat-control-plane-7ktdh, container etcd
STEP: Collecting events for Pod kube-system/etcd-md-scale-vc6mat-control-plane-7ktdh
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wns5x, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-5vlt4
STEP: failed to find events of Pod "etcd-md-scale-vc6mat-control-plane-7ktdh"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-pbntr
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-vc6mat-control-plane-7ktdh, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-windows-wns5x
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-vc6mat-control-plane-7ktdh
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-vc6mat-control-plane-7ktdh
STEP: failed to find events of Pod "kube-scheduler-md-scale-vc6mat-control-plane-7ktdh"
STEP: failed to find events of Pod "kube-controller-manager-md-scale-vc6mat-control-plane-7ktdh"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-r9znp, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-vc6mat-control-plane-7ktdh
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-vc6mat-control-plane-7ktdh, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-md-scale-vc6mat-control-plane-7ktdh"
STEP: Collecting events for Pod kube-system/csi-proxy-hbxgb
STEP: Creating log watcher for controller kube-system/csi-proxy-hbxgb, container csi-proxy
STEP: Collecting events for Pod kube-system/containerd-logger-x9sh8
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-v7wg8, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-2gtt9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-vc6mat-control-plane-7ktdh, container kube-apiserver
... skipping 74 lines ...
STEP: Dumping logs from the "node-drain-p8ye3k" workload cluster
STEP: Dumping workload cluster node-drain-w98bqa/node-drain-p8ye3k logs
Dec 31 17:19:19.505: INFO: Collecting logs for Linux node node-drain-p8ye3k-control-plane-5lx2j in cluster node-drain-p8ye3k in namespace node-drain-w98bqa
Dec 31 17:25:54.158: INFO: Collecting boot logs for AzureMachine node-drain-p8ye3k-control-plane-5lx2j
Failed to get logs for machine node-drain-p8ye3k-control-plane-7zqt4, cluster node-drain-w98bqa/node-drain-p8ye3k: dialing public load balancer at node-drain-p8ye3k-41957eee.canadacentral.cloudapp.azure.com: dial tcp 20.220.154.234:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-w98bqa/node-drain-p8ye3k kube-system pod logs
STEP: Fetching kube-system pod logs took 421.340715ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-p8ye3k-control-plane-5lx2j, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-p8ye3k-control-plane-5lx2j
STEP: Collecting events for Pod kube-system/kube-proxy-2bhpf
STEP: Creating log watcher for controller kube-system/kube-proxy-2bhpf, container kube-proxy
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully set and use node drain timeout
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:195
A node should be forcefully removed if it cannot be drained in time
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-12-31T20:54:47Z"}
++ early_exit_handler
++ '[' -n 160 ']'
++ kill -TERM 160
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-12-31T21:09:48Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-12-31T21:09:48Z"}