Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 596 lines ...
STEP: Fetching kube-system pod logs took 278.051176ms
STEP: Dumping workload cluster mhc-remediation-o2nq5x/mhc-remediation-0avxxm Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-zmh5n, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-0avxxm-control-plane-dzl4v, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-4d45s, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-0avxxm-control-plane-dzl4v
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-0avxxm-control-plane-dzl4v"
STEP: Creating log watcher for controller kube-system/kube-proxy-mk96v, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-0avxxm-control-plane-dzl4v, container etcd
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-0avxxm-control-plane-dzl4v
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-0avxxm-control-plane-dzl4v"
STEP: Creating log watcher for controller kube-system/calico-node-m89vm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-szxvx, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-0avxxm-control-plane-dzl4v
STEP: failed to find events of Pod "etcd-mhc-remediation-0avxxm-control-plane-dzl4v"
STEP: Collecting events for Pod kube-system/kube-proxy-szxvx
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-0avxxm-control-plane-dzl4v
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-0avxxm-control-plane-dzl4v, container kube-scheduler
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-0avxxm-control-plane-dzl4v"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-0avxxm-control-plane-dzl4v, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-m89vm
STEP: Collecting events for Pod kube-system/calico-node-rn9tc
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-w2nwq, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-rn9tc, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-zmh5n
... skipping 23 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Mon, 26 Dec 2022 21:10:41 UTC on Ginkgo node 10 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 26 21:10:41.134: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/26 21:10:41 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-oopvzp" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-oopvzp --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 61 lines ...
STEP: Dumping workload cluster self-hosted/self-hosted-oopvzp kube-system pod logs
STEP: Fetching kube-system pod logs took 253.019692ms
STEP: Creating log watcher for controller kube-system/calico-node-5rbln, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-oopvzp-control-plane-vjkb4
STEP: Collecting events for Pod kube-system/etcd-self-hosted-oopvzp-control-plane-vjkb4
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-oopvzp-control-plane-vjkb4, container etcd
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-oopvzp-control-plane-vjkb4"
STEP: Dumping workload cluster self-hosted/self-hosted-oopvzp Azure activity log
STEP: failed to find events of Pod "etcd-self-hosted-oopvzp-control-plane-vjkb4"
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-oopvzp-control-plane-vjkb4, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-rqmbc, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-rqmbc
STEP: Creating log watcher for controller kube-system/kube-proxy-7vsf7, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-chkj9
STEP: Creating log watcher for controller kube-system/calico-node-chkj9, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-5rbln
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-oopvzp-control-plane-vjkb4
STEP: failed to find events of Pod "kube-apiserver-self-hosted-oopvzp-control-plane-vjkb4"
STEP: Creating log watcher for controller kube-system/kube-proxy-qx85j, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-oopvzp-control-plane-vjkb4, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-7vsf7
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-oopvzp-control-plane-vjkb4, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-qx85j
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-85kth, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-s5rp2
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-85kth
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-s5rp2, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-oopvzp-control-plane-vjkb4
STEP: failed to find events of Pod "kube-scheduler-self-hosted-oopvzp-control-plane-vjkb4"
STEP: Fetching activity logs took 2.833920198s
Dec 26 21:19:51.729: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Dec 26 21:19:52.056: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-oopvzp
INFO: Waiting for the Cluster self-hosted/self-hosted-oopvzp to be deleted
STEP: Waiting for cluster self-hosted-oopvzp to be deleted
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-558bffb98f-mbtqj, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-5b6d47468d-mrhhm, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8f6f78b8b-hbxcp, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66968bb4c5-shnpk, container manager: http2: client connection lost
Dec 26 21:24:32.264: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Dec 26 21:24:32.281: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Dec 26 21:25:01.329: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 204 lines ...
Dec 26 21:19:58.078: INFO: Collecting logs for Windows node quick-sta-swtvm in cluster quick-start-0lwd69 in namespace quick-start-b6z5qq
Dec 26 21:22:37.507: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-swtvm to /logs/artifacts/clusters/quick-start-0lwd69/machines/quick-start-0lwd69-md-win-c48c856f9-8dwpk/crashdumps.tar
Dec 26 21:22:39.375: INFO: Collecting boot logs for AzureMachine quick-start-0lwd69-md-win-swtvm
Failed to get logs for machine quick-start-0lwd69-md-win-c48c856f9-8dwpk, cluster quick-start-b6z5qq/quick-start-0lwd69: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 26 21:22:40.327: INFO: Collecting logs for Windows node quick-sta-vkdnj in cluster quick-start-0lwd69 in namespace quick-start-b6z5qq
Dec 26 21:25:22.167: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-vkdnj to /logs/artifacts/clusters/quick-start-0lwd69/machines/quick-start-0lwd69-md-win-c48c856f9-kzffz/crashdumps.tar
Dec 26 21:25:24.002: INFO: Collecting boot logs for AzureMachine quick-start-0lwd69-md-win-vkdnj
Failed to get logs for machine quick-start-0lwd69-md-win-c48c856f9-kzffz, cluster quick-start-b6z5qq/quick-start-0lwd69: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-b6z5qq/quick-start-0lwd69 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-vv947
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-0lwd69-control-plane-x5qwc
STEP: Fetching kube-system pod logs took 448.114098ms
STEP: Dumping workload cluster quick-start-b6z5qq/quick-start-0lwd69 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-957qf, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-qffsg, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-957qf, container calico-node-felix
STEP: Collecting events for Pod kube-system/csi-proxy-fmgtc
STEP: failed to find events of Pod "kube-scheduler-quick-start-0lwd69-control-plane-x5qwc"
STEP: Creating log watcher for controller kube-system/calico-node-bpwjh, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-vv947, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-gns7l
STEP: Collecting events for Pod kube-system/calico-node-windows-957qf
STEP: Creating log watcher for controller kube-system/calico-node-windows-h5lm8, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-quick-start-0lwd69-control-plane-x5qwc, container etcd
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nr7px
STEP: Creating log watcher for controller kube-system/kube-proxy-m4g9t, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-blfr4
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-qffsg
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nr7px, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-bpwjh
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-0lwd69-control-plane-x5qwc
STEP: failed to find events of Pod "kube-apiserver-quick-start-0lwd69-control-plane-x5qwc"
STEP: Collecting events for Pod kube-system/etcd-quick-start-0lwd69-control-plane-x5qwc
STEP: failed to find events of Pod "etcd-quick-start-0lwd69-control-plane-x5qwc"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-0lwd69-control-plane-x5qwc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-h5lm8, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-476bj, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-m4g9t
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-0lwd69-control-plane-x5qwc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-blfr4, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-phj6p
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-476bj
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-phj6p, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-0lwd69-control-plane-x5qwc, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-0lwd69-control-plane-x5qwc
STEP: failed to find events of Pod "kube-controller-manager-quick-start-0lwd69-control-plane-x5qwc"
STEP: Creating log watcher for controller kube-system/kube-proxy-gns7l, container kube-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-ckwrc
STEP: Creating log watcher for controller kube-system/csi-proxy-fmgtc, container csi-proxy
STEP: Collecting events for Pod kube-system/containerd-logger-847gm
STEP: Creating log watcher for controller kube-system/containerd-logger-qp9hl, container containerd-logger
STEP: Collecting events for Pod kube-system/calico-node-windows-h5lm8
... skipping 90 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-p8xp2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-kbzln, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-8hcdtj-control-plane-zdl6m
STEP: Collecting events for Pod kube-system/etcd-machine-pool-8hcdtj-control-plane-zdl6m
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-8hcdtj-control-plane-zdl6m, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-8hcdtj-control-plane-zdl6m, container etcd
STEP: failed to find events of Pod "etcd-machine-pool-8hcdtj-control-plane-zdl6m"
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-8hcdtj-control-plane-zdl6m
STEP: Creating log watcher for controller kube-system/calico-node-c6b6p, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-8hcdtj-control-plane-zdl6m, container kube-controller-manager
STEP: failed to find events of Pod "kube-scheduler-machine-pool-8hcdtj-control-plane-zdl6m"
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-8hcdtj-control-plane-zdl6m
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-8hcdtj-control-plane-zdl6m"
STEP: Collecting events for Pod kube-system/calico-node-c6b6p
STEP: Creating log watcher for controller kube-system/kube-proxy-cfjmx, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-jj697, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-cfjmx
STEP: Collecting events for Pod kube-system/kube-proxy-kbzln
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-jq48z, container kube-proxy
... skipping 4 lines ...
STEP: Collecting events for Pod kube-system/calico-node-windows-p9w9k
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tkjf4, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-56ph9
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tkjf4
STEP: Fetching kube-system pod logs took 396.410678ms
STEP: Dumping workload cluster machine-pool-od4zsz/machine-pool-8hcdtj Azure activity log
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-jq48z, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-cfjmx, container kube-proxy: pods "machine-pool-8hcdtj-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-jj697, container calico-node: pods "machine-pool-8hcdtj-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-p9w9k, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-p9w9k, container calico-node-felix: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.7432021s
STEP: Dumping all the Cluster API resources in the "machine-pool-od4zsz" namespace
STEP: Deleting cluster machine-pool-od4zsz/machine-pool-8hcdtj
STEP: Deleting cluster machine-pool-8hcdtj
INFO: Waiting for the Cluster machine-pool-od4zsz/machine-pool-8hcdtj to be deleted
STEP: Waiting for cluster machine-pool-8hcdtj to be deleted
... skipping 90 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-6csbdx-control-plane-fwr8r, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-6csbdx-control-plane-fwr8r
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-6csbdx-control-plane-fwr8r
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-s7dnf
STEP: Collecting events for Pod kube-system/etcd-node-drain-6csbdx-control-plane-fwr8r
STEP: Creating log watcher for controller kube-system/etcd-node-drain-6csbdx-control-plane-fwr8r, container etcd
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-6csbdx-control-plane-hbpbn, container kube-controller-manager: pods "kube-controller-manager-node-drain-6csbdx-control-plane-hbpbn" not found
STEP: Fetching activity logs took 3.286817468s
STEP: Dumping all the Cluster API resources in the "node-drain-dqgo40" namespace
STEP: Deleting cluster node-drain-dqgo40/node-drain-6csbdx
STEP: Deleting cluster node-drain-6csbdx
INFO: Waiting for the Cluster node-drain-dqgo40/node-drain-6csbdx to be deleted
STEP: Waiting for cluster node-drain-6csbdx to be deleted
... skipping 78 lines ...
Dec 26 21:23:21.310: INFO: Collecting logs for Windows node md-scale-dwq6d in cluster md-scale-322d2h in namespace md-scale-8a29cg
Dec 26 21:25:57.797: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-dwq6d to /logs/artifacts/clusters/md-scale-322d2h/machines/md-scale-322d2h-md-win-746648657d-b7pdp/crashdumps.tar
Dec 26 21:25:59.572: INFO: Collecting boot logs for AzureMachine md-scale-322d2h-md-win-dwq6d
Failed to get logs for machine md-scale-322d2h-md-win-746648657d-b7pdp, cluster md-scale-8a29cg/md-scale-322d2h: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 26 21:26:00.477: INFO: Collecting logs for Windows node md-scale-gkzg6 in cluster md-scale-322d2h in namespace md-scale-8a29cg
Dec 26 21:28:41.420: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-gkzg6 to /logs/artifacts/clusters/md-scale-322d2h/machines/md-scale-322d2h-md-win-746648657d-mzv8z/crashdumps.tar
Dec 26 21:28:43.333: INFO: Collecting boot logs for AzureMachine md-scale-322d2h-md-win-gkzg6
Failed to get logs for machine md-scale-322d2h-md-win-746648657d-mzv8z, cluster md-scale-8a29cg/md-scale-322d2h: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-8a29cg/md-scale-322d2h kube-system pod logs
STEP: Fetching kube-system pod logs took 432.303855ms
STEP: Dumping workload cluster md-scale-8a29cg/md-scale-322d2h Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-hxh7z, container calico-node-startup
STEP: Collecting events for Pod kube-system/etcd-md-scale-322d2h-control-plane-g657x
STEP: Collecting events for Pod kube-system/kube-proxy-q6dpj
STEP: Creating log watcher for controller kube-system/calico-node-windows-wwqgx, container calico-node-startup
STEP: failed to find events of Pod "etcd-md-scale-322d2h-control-plane-g657x"
STEP: Collecting events for Pod kube-system/calico-node-windows-wwqgx
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-322d2h-control-plane-g657x
STEP: Creating log watcher for controller kube-system/containerd-logger-j4dx5, container containerd-logger
STEP: Collecting events for Pod kube-system/kube-proxy-nzbzk
STEP: failed to find events of Pod "kube-controller-manager-md-scale-322d2h-control-plane-g657x"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-322d2h-control-plane-g657x, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-nzbzk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-q6dpj, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-hxh7z
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-322d2h-control-plane-g657x
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-mrm9h, container calico-kube-controllers
STEP: failed to find events of Pod "kube-apiserver-md-scale-322d2h-control-plane-g657x"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-322d2h-control-plane-g657x, container kube-controller-manager
STEP: Collecting events for Pod kube-system/containerd-logger-j4dx5
STEP: Creating log watcher for controller kube-system/containerd-logger-jrmdc, container containerd-logger
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-746qf, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-322d2h-control-plane-g657x
STEP: failed to find events of Pod "kube-scheduler-md-scale-322d2h-control-plane-g657x"
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-322d2h-control-plane-g657x, container kube-scheduler
STEP: Creating log watcher for controller kube-system/csi-proxy-8m9z5, container csi-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-mrm9h
STEP: Collecting events for Pod kube-system/csi-proxy-8m9z5
STEP: Creating log watcher for controller kube-system/calico-node-cdhvv, container calico-node
STEP: Creating log watcher for controller kube-system/csi-proxy-74jvw, container csi-proxy
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-12-27T01:02:34Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-12-27T01:17:34Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-12-27T01:17:34Z"}