Recent runs | View in Spyglass
Result      | FAILURE
Tests       | 0 failed / 7 succeeded
Started     |
Elapsed     | 4h15m
Revision    | release-1.5
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 590 lines ...
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-4sk9r
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jkqt6
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-lbh93w-control-plane-0
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-4sk9r, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-fhkxl, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-lbh93w-control-plane-0, container kube-scheduler
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-lbh93w-control-plane-0"
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-lbh93w-control-plane-0"
STEP: Fetching activity logs took 1.35788873s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-lmg160" namespace
STEP: Deleting cluster kcp-adoption-lmg160/kcp-adoption-lbh93w
STEP: Deleting cluster kcp-adoption-lbh93w
INFO: Waiting for the Cluster kcp-adoption-lmg160/kcp-adoption-lbh93w to be deleted
STEP: Waiting for cluster kcp-adoption-lbh93w to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Mon, 26 Dec 2022 17:03:19 UTC on Ginkgo node 2 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 26 17:03:19.776: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/26 17:03:19 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-xdut9g" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-xdut9g --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 62 lines ...
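For reference, the template-generation command recorded in the INFO line above can be restated as a multi-line shell command (a sketch: the log's "(default)" placeholder means no explicit --infrastructure value was passed, and the redirect to a file is illustrative, not part of the logged invocation):

    # "clusterctl config cluster" prints a workload cluster template to stdout.
    # --infrastructure is omitted here; the log's "(default)" means the default
    # provider installed in the management cluster was resolved automatically.
    clusterctl config cluster self-hosted-xdut9g \
        --kubernetes-version v1.23.15 \
        --control-plane-machine-count 1 \
        --worker-machine-count 1 \
        --flavor management > self-hosted-xdut9g.yaml  # output file name is illustrative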
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-lk2mn
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-xdut9g-control-plane-xhpmh
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-lk2mn, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-22qlf, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-fl5kd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-q7p6d, container coredns
STEP: failed to find events of Pod "kube-apiserver-self-hosted-xdut9g-control-plane-xhpmh"
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-xdut9g-control-plane-xhpmh
STEP: Creating log watcher for controller kube-system/calico-node-87pj7, container calico-node
STEP: failed to find events of Pod "kube-scheduler-self-hosted-xdut9g-control-plane-xhpmh"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-q7p6d
STEP: Collecting events for Pod kube-system/calico-node-87pj7
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-xdut9g-control-plane-xhpmh, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-fl5kd, container calico-node
STEP: Collecting events for Pod kube-system/etcd-self-hosted-xdut9g-control-plane-xhpmh
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-xdut9g-control-plane-xhpmh, container kube-apiserver
STEP: failed to find events of Pod "etcd-self-hosted-xdut9g-control-plane-xhpmh"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-lzxh7
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-lzxh7, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-dgftd
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-xdut9g-control-plane-xhpmh, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-22qlf
STEP: Creating log watcher for controller kube-system/kube-proxy-dgftd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-xdut9g-control-plane-xhpmh, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-xdut9g-control-plane-xhpmh
STEP: Fetching kube-system pod logs took 261.296936ms
STEP: Dumping workload cluster self-hosted/self-hosted-xdut9g Azure activity log
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-xdut9g-control-plane-xhpmh"
STEP: Fetching activity logs took 1.432084054s
Dec 26 17:12:37.035: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Dec 26 17:12:37.449: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-xdut9g
INFO: Waiting for the Cluster self-hosted/self-hosted-xdut9g to be deleted
STEP: Waiting for cluster self-hosted-xdut9g to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-74b6b6b77f-45xdh, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6c76c59d6b-5x8ww, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-7df9bc44b4-5xzlg, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7574645774-r4qpc, container manager: http2: client connection lost
Dec 26 17:17:07.651: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Dec 26 17:17:07.674: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Dec 26 17:17:39.918: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 71 lines ...
STEP: Dumping workload cluster mhc-remediation-819wq8/mhc-remediation-cfhlp7 Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-ts9kz
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-qp8p4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9q54j, container coredns
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-cfhlp7-control-plane-msc4j
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9q54j
STEP: failed to find events of Pod "etcd-mhc-remediation-cfhlp7-control-plane-msc4j"
STEP: Creating log watcher for controller kube-system/calico-node-prtd2, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-hdq64, container coredns
STEP: Collecting events for Pod kube-system/calico-node-prtd2
STEP: Creating log watcher for controller kube-system/calico-node-qlfh7, container calico-node
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-qp8p4
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-cfhlp7-control-plane-msc4j, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-qlfh7
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-cfhlp7-control-plane-msc4j
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-cfhlp7-control-plane-msc4j"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-cfhlp7-control-plane-msc4j, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-cfhlp7-control-plane-msc4j
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-cfhlp7-control-plane-msc4j, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-cshx9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-cshx9
STEP: Creating log watcher for controller kube-system/kube-proxy-ts9kz, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-cfhlp7-control-plane-msc4j
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-cfhlp7-control-plane-msc4j"
STEP: Fetching activity logs took 1.513911452s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-819wq8" namespace
STEP: Deleting cluster mhc-remediation-819wq8/mhc-remediation-cfhlp7
STEP: Deleting cluster mhc-remediation-cfhlp7
INFO: Waiting for the Cluster mhc-remediation-819wq8/mhc-remediation-cfhlp7 to be deleted
STEP: Waiting for cluster mhc-remediation-cfhlp7 to be deleted
... skipping 208 lines ...
Dec 26 17:11:31.793: INFO: Collecting logs for Windows node quick-sta-5zpjv in cluster quick-start-afsh28 in namespace quick-start-r7gxt1
Dec 26 17:14:07.168: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-5zpjv to /logs/artifacts/clusters/quick-start-afsh28/machines/quick-start-afsh28-md-win-845cff6bd7-6v2kk/crashdumps.tar
Dec 26 17:14:08.966: INFO: Collecting boot logs for AzureMachine quick-start-afsh28-md-win-5zpjv
Failed to get logs for machine quick-start-afsh28-md-win-845cff6bd7-6v2kk, cluster quick-start-r7gxt1/quick-start-afsh28: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 26 17:14:09.874: INFO: Collecting logs for Windows node quick-sta-ff4wq in cluster quick-start-afsh28 in namespace quick-start-r7gxt1
Dec 26 17:16:43.267: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-ff4wq to /logs/artifacts/clusters/quick-start-afsh28/machines/quick-start-afsh28-md-win-845cff6bd7-xxvfd/crashdumps.tar
Dec 26 17:16:45.180: INFO: Collecting boot logs for AzureMachine quick-start-afsh28-md-win-ff4wq
Failed to get logs for machine quick-start-afsh28-md-win-845cff6bd7-xxvfd, cluster quick-start-r7gxt1/quick-start-afsh28: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-r7gxt1/quick-start-afsh28 kube-system pod logs
STEP: Fetching kube-system pod logs took 442.086133ms
STEP: Dumping workload cluster quick-start-r7gxt1/quick-start-afsh28 Azure activity log
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-h99f7
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-bgpbs, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-gx7d9, container calico-node-startup
... skipping 19 lines ...
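The crash-dump command that exited with status 1 in the two "Failed to get logs for machine" messages above is a PowerShell one-liner; expanded for readability it reads as follows (same command as quoted in the log, with descriptive comments added):

    $p = 'c:\localdumps'
    if (Test-Path $p) {
        # Pack any crash dumps into c:\crashdumps.tar so the harness can copy the
        # archive off the node (see the "Attempting to copy file" lines above).
        tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_" }
    } else {
        Write-Host "No crash dumps found at $p"
    }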
STEP: Creating log watcher for controller kube-system/calico-node-windows-fvgp4, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-afsh28-control-plane-gx4jw, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-tmgp7, container calico-node
STEP: Collecting events for Pod kube-system/etcd-quick-start-afsh28-control-plane-gx4jw
STEP: Collecting events for Pod kube-system/containerd-logger-c4znz
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-afsh28-control-plane-gx4jw, container kube-controller-manager
STEP: failed to find events of Pod "etcd-quick-start-afsh28-control-plane-gx4jw"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-afsh28-control-plane-gx4jw, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-windows-9jmrb
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-slrcq, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-afsh28-control-plane-gx4jw
STEP: failed to find events of Pod "kube-scheduler-quick-start-afsh28-control-plane-gx4jw"
STEP: Creating log watcher for controller kube-system/calico-node-windows-fvgp4, container calico-node-felix
STEP: Collecting events for Pod kube-system/calico-node-windows-fvgp4
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-afsh28-control-plane-gx4jw
STEP: failed to find events of Pod "kube-controller-manager-quick-start-afsh28-control-plane-gx4jw"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-79ddx
STEP: Creating log watcher for controller kube-system/kube-proxy-82snx, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-79ddx, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-h99f7, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-afsh28-control-plane-gx4jw
STEP: failed to find events of Pod "kube-apiserver-quick-start-afsh28-control-plane-gx4jw"
STEP: Fetching activity logs took 1.987443226s
STEP: Dumping all the Cluster API resources in the "quick-start-r7gxt1" namespace
STEP: Deleting cluster quick-start-r7gxt1/quick-start-afsh28
STEP: Deleting cluster quick-start-afsh28
INFO: Waiting for the Cluster quick-start-r7gxt1/quick-start-afsh28 to be deleted
STEP: Waiting for cluster quick-start-afsh28 to be deleted
... skipping 100 lines ...
STEP: Collecting events for Pod kube-system/kube-scheduler-node-drain-o7j9hj-control-plane-klcnf
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-o7j9hj-control-plane-jqh7p, container kube-controller-manager
STEP: Collecting events for Pod kube-system/etcd-node-drain-o7j9hj-control-plane-klcnf
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-o7j9hj-control-plane-jqh7p, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-node-drain-o7j9hj-control-plane-jqh7p
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-o7j9hj-control-plane-klcnf, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/calico-node-bf6rs, container calico-node: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-o7j9hj-control-plane-klcnf, container kube-scheduler: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-o7j9hj-control-plane-klcnf, container kube-apiserver: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-o7j9hj-control-plane-klcnf, container kube-controller-manager: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-kl8w5, container kube-proxy: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-o7j9hj-control-plane-klcnf, container etcd: pods "node-drain-o7j9hj-control-plane-klcnf" not found
STEP: Fetching activity logs took 3.015671503s
STEP: Dumping all the Cluster API resources in the "node-drain-91e1zs" namespace
STEP: Deleting cluster node-drain-91e1zs/node-drain-o7j9hj
STEP: Deleting cluster node-drain-o7j9hj
INFO: Waiting for the Cluster node-drain-91e1zs/node-drain-o7j9hj to be deleted
STEP: Waiting for cluster node-drain-o7j9hj to be deleted
... skipping 82 lines ...
STEP: Collecting events for Pod kube-system/calico-node-kwjwf
STEP: Creating log watcher for controller kube-system/calico-node-windows-phcwd, container calico-node-startup
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-wvbs9, container coredns
STEP: Collecting events for Pod kube-system/etcd-machine-pool-vpepfh-control-plane-xbl8b
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-wvbs9
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-vpepfh-control-plane-xbl8b, container etcd
STEP: failed to find events of Pod "etcd-machine-pool-vpepfh-control-plane-xbl8b"
STEP: Creating log watcher for controller kube-system/kube-proxy-jbq4v, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-cbsx2, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-dnfvj, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-dnfvj
STEP: Collecting events for Pod kube-system/calico-node-windows-phcwd
STEP: Creating log watcher for controller kube-system/kube-proxy-4c8kg, container kube-proxy
... skipping 15 lines ...
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-vpepfh-control-plane-xbl8b
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-qc6mc, container coredns
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-6lw7q
STEP: Collecting events for Pod kube-system/calico-node-2x4gj
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-vpepfh-control-plane-xbl8b
STEP: Collecting events for Pod kube-system/kube-proxy-cbsx2
STEP: Error starting logs stream for pod kube-system/kube-proxy-cbsx2, container kube-proxy: pods "machine-pool-vpepfh-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-2x4gj, container calico-node: pods "machine-pool-vpepfh-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-4c8kg, container kube-proxy: pods "machine-pool-vpepfh-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-d94dd, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-kwjwf, container calico-node: pods "machine-pool-vpepfh-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-phcwd, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-wwlzh, container calico-node: pods "machine-pool-vpepfh-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-jbq4v, container kube-proxy: pods "machine-pool-vpepfh-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-phcwd, container calico-node-startup: pods "win-p-win000002" not found
STEP: Fetching activity logs took 1.988804415s
STEP: Dumping all the Cluster API resources in the "machine-pool-bd3ixc" namespace
STEP: Deleting cluster machine-pool-bd3ixc/machine-pool-vpepfh
STEP: Deleting cluster machine-pool-vpepfh
INFO: Waiting for the Cluster machine-pool-bd3ixc/machine-pool-vpepfh to be deleted
STEP: Waiting for cluster machine-pool-vpepfh to be deleted
... skipping 78 lines ...
Dec 26 17:22:54.561: INFO: Collecting logs for Windows node md-scale-vcdtc in cluster md-scale-vchxtt in namespace md-scale-95z1t4
Dec 26 17:25:27.298: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-vcdtc to /logs/artifacts/clusters/md-scale-vchxtt/machines/md-scale-vchxtt-md-win-558f8dd57b-l5d4r/crashdumps.tar
Dec 26 17:25:29.242: INFO: Collecting boot logs for AzureMachine md-scale-vchxtt-md-win-vcdtc
Failed to get logs for machine md-scale-vchxtt-md-win-558f8dd57b-l5d4r, cluster md-scale-95z1t4/md-scale-vchxtt: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 26 17:25:29.975: INFO: Collecting logs for Windows node md-scale-pp9sh in cluster md-scale-vchxtt in namespace md-scale-95z1t4
Dec 26 17:28:03.718: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-pp9sh to /logs/artifacts/clusters/md-scale-vchxtt/machines/md-scale-vchxtt-md-win-558f8dd57b-zn4wd/crashdumps.tar
Dec 26 17:28:05.615: INFO: Collecting boot logs for AzureMachine md-scale-vchxtt-md-win-pp9sh
Failed to get logs for machine md-scale-vchxtt-md-win-558f8dd57b-zn4wd, cluster md-scale-95z1t4/md-scale-vchxtt: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-95z1t4/md-scale-vchxtt kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-windows-2tkgc, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-vchxtt-control-plane-klqd8
STEP: Creating log watcher for controller kube-system/csi-proxy-s9wzl, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-s9wzl
STEP: Creating log watcher for controller kube-system/etcd-md-scale-vchxtt-control-plane-klqd8, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-2tkgc, container calico-node-felix
STEP: Collecting events for Pod kube-system/etcd-md-scale-vchxtt-control-plane-klqd8
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-nsdqd, container calico-kube-controllers
STEP: failed to find events of Pod "etcd-md-scale-vchxtt-control-plane-klqd8"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-vchxtt-control-plane-klqd8, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-windows-2tkgc
STEP: Creating log watcher for controller kube-system/calico-node-windows-574cf, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-nsdqd
STEP: Creating log watcher for controller kube-system/calico-node-d7kmh, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-vchxtt-control-plane-klqd8
... skipping 43 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-12-26T20:54:35Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-12-26T21:09:35Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-12-26T21:09:35Z"}
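For context, the "++" lines above are bash xtrace output from the Prow entrypoint's cleanup path after the 4h0m0s timeout. Below is a minimal sketch of the traced control flow; every name except early_exit_handler and cleanup_dind is an assumption, since the trace shows only expanded values (e.g. PID 162):

    # Sketch reconstructed from the xtrace lines above, not the actual Prow script.
    early_exit_handler() {
      if [ -n "${TEST_PID:-}" ]; then   # trace: '[' -n 162 ']'
        kill -TERM "${TEST_PID}"        # trace: kill -TERM 162
      fi
      cleanup_dind
    }

    cleanup_dind() {
      if [[ "${DOCKER_IN_DOCKER_ENABLED:-false}" == "true" ]]; then  # trace: [[ true == true ]]
        echo 'Cleaning up after docker'
        # remaining docker shutdown steps are elided in the log ("... skipping 12 lines ...")
      fi
    }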