Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 0 failed / 6 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 595 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-5acebs-control-plane-g6sh8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-pjn54, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-2gvqx
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-j4nm4
STEP: Creating log watcher for controller kube-system/calico-node-2gvqx, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-5acebs-control-plane-g6sh8
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-5acebs-control-plane-g6sh8"
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-5acebs-control-plane-g6sh8, container etcd
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9xct5
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-5acebs-control-plane-g6sh8
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-pjn54
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-5acebs-control-plane-g6sh8"
STEP: Creating log watcher for controller kube-system/calico-node-msst2, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-4nqpw, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-5acebs-control-plane-g6sh8, container kube-controller-manager
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-5acebs-control-plane-g6sh8
STEP: failed to find events of Pod "etcd-mhc-remediation-5acebs-control-plane-g6sh8"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-5acebs-control-plane-g6sh8, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-5acebs-control-plane-g6sh8
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-5acebs-control-plane-g6sh8"
STEP: Collecting events for Pod kube-system/kube-proxy-q4d4r
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-j4nm4, container coredns
STEP: Fetching activity logs took 2.130283443s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-m06ekt" namespace
STEP: Deleting cluster mhc-remediation-m06ekt/mhc-remediation-5acebs
STEP: Deleting cluster mhc-remediation-5acebs
... skipping 18 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Thu, 29 Dec 2022 21:13:59 UTC on Ginkgo node 8 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Dec 29 21:13:59.650: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/12/29 21:13:59 failed trying to get namespace (self-hosted): namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-fbpui0" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-fbpui0 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 70 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-2s6l7, container coredns
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-fbpui0-control-plane-6kg6r, container etcd
STEP: Fetching kube-system pod logs took 273.429734ms
STEP: Dumping workload cluster self-hosted/self-hosted-fbpui0 Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-fbpui0-control-plane-6kg6r
STEP: Creating log watcher for controller kube-system/kube-proxy-k4xl2, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-fbpui0-control-plane-6kg6r"
STEP: Collecting events for Pod kube-system/etcd-self-hosted-fbpui0-control-plane-6kg6r
STEP: failed to find events of Pod "etcd-self-hosted-fbpui0-control-plane-6kg6r"
STEP: Collecting events for Pod kube-system/kube-proxy-k4xl2
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-fbpui0-control-plane-6kg6r
STEP: failed to find events of Pod "kube-apiserver-self-hosted-fbpui0-control-plane-6kg6r"
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-fbpui0-control-plane-6kg6r, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-fbpui0-control-plane-6kg6r, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-fbpui0-control-plane-6kg6r, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-fbpui0-control-plane-6kg6r
STEP: failed to find events of Pod "kube-scheduler-self-hosted-fbpui0-control-plane-6kg6r"
STEP: Creating log watcher for controller kube-system/kube-proxy-h7mkc, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-h7mkc
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-j8vjm, container calico-kube-controllers
STEP: Fetching activity logs took 1.70385244s
Dec 29 21:24:45.944: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Dec 29 21:24:46.663: INFO: Deleting all clusters in the self-hosted namespace
... skipping 105 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-wbhsm, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-wbhsm
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-228nq, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-228nq
STEP: Creating log watcher for controller kube-system/kube-proxy-t4gv5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-8lmjpa-control-plane-57xd8, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/kube-proxy-qqswq, container kube-proxy: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-8lmjpa-control-plane-57xd8, container kube-scheduler: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-8lmjpa-control-plane-57xd8, container kube-controller-manager: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Error starting logs stream for pod kube-system/calico-node-wbhsm, container calico-node: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-8lmjpa-control-plane-57xd8, container etcd: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-8lmjpa-control-plane-57xd8, container kube-apiserver: pods "node-drain-8lmjpa-control-plane-57xd8" not found
STEP: Fetching activity logs took 4.91657171s
STEP: Dumping all the Cluster API resources in the "node-drain-l8yztr" namespace
STEP: Deleting cluster node-drain-l8yztr/node-drain-8lmjpa
STEP: Deleting cluster node-drain-8lmjpa
INFO: Waiting for the Cluster node-drain-l8yztr/node-drain-8lmjpa to be deleted
STEP: Waiting for cluster node-drain-8lmjpa to be deleted
... skipping 89 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-vnx7n, container coredns
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-2fa6vm-control-plane-qwzxb, container etcd
STEP: Collecting events for Pod kube-system/etcd-machine-pool-2fa6vm-control-plane-qwzxb
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-vnx7n
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-2fa6vm-control-plane-qwzxb
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-2fa6vm-control-plane-qwzxb, container kube-apiserver
STEP: failed to find events of Pod "etcd-machine-pool-2fa6vm-control-plane-qwzxb"
STEP: Creating log watcher for controller kube-system/kube-proxy-gx47c, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-k7qms, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-gx47c
STEP: Collecting events for Pod kube-system/calico-node-windows-26bd4
STEP: Creating log watcher for controller kube-system/calico-node-windows-26bd4, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-k7qms
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-jtjjl, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-windows-prb7g
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-prb7g, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-2fa6vm-control-plane-qwzxb, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-2fa6vm-control-plane-qwzxb, container kube-scheduler
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jtjjl
STEP: Error starting logs stream for pod kube-system/calico-node-r6n85, container calico-node: pods "machine-pool-2fa6vm-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-26bd4, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-gx47c, container kube-proxy: pods "machine-pool-2fa6vm-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-26bd4, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-prb7g, container kube-proxy: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.206226568s
STEP: Dumping all the Cluster API resources in the "machine-pool-s6s5jh" namespace
STEP: Deleting cluster machine-pool-s6s5jh/machine-pool-2fa6vm
STEP: Deleting cluster machine-pool-2fa6vm
INFO: Waiting for the Cluster machine-pool-s6s5jh/machine-pool-2fa6vm to be deleted
STEP: Waiting for cluster machine-pool-2fa6vm to be deleted
... skipping 208 lines ...
Dec 29 21:24:56.625: INFO: Collecting logs for Windows node quick-sta-cc22v in cluster quick-start-pq57sj in namespace quick-start-x6th8v
Dec 29 21:27:36.111: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-cc22v to /logs/artifacts/clusters/quick-start-pq57sj/machines/quick-start-pq57sj-md-win-5bcd94775-5m8kr/crashdumps.tar
Dec 29 21:27:37.818: INFO: Collecting boot logs for AzureMachine quick-start-pq57sj-md-win-cc22v
Failed to get logs for machine quick-start-pq57sj-md-win-5bcd94775-5m8kr, cluster quick-start-x6th8v/quick-start-pq57sj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 29 21:27:38.717: INFO: Collecting logs for Windows node quick-sta-4v2h4 in cluster quick-start-pq57sj in namespace quick-start-x6th8v
Dec 29 21:30:13.467: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-4v2h4 to /logs/artifacts/clusters/quick-start-pq57sj/machines/quick-start-pq57sj-md-win-5bcd94775-n9wzh/crashdumps.tar
Dec 29 21:30:15.088: INFO: Collecting boot logs for AzureMachine quick-start-pq57sj-md-win-4v2h4
Failed to get logs for machine quick-start-pq57sj-md-win-5bcd94775-n9wzh, cluster quick-start-x6th8v/quick-start-pq57sj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-x6th8v/quick-start-pq57sj kube-system pod logs
STEP: Fetching kube-system pod logs took 391.559871ms
STEP: Dumping workload cluster quick-start-x6th8v/quick-start-pq57sj Azure activity log
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-zwstk
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-pq57sj-control-plane-dnl2d
STEP: Creating log watcher for controller kube-system/etcd-quick-start-pq57sj-control-plane-dnl2d, container etcd
STEP: Creating log watcher for controller kube-system/csi-proxy-dp92q, container csi-proxy
STEP: Creating log watcher for controller kube-system/csi-proxy-znmmt, container csi-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-wlnmq
STEP: Creating log watcher for controller kube-system/calico-node-zgbp4, container calico-node
STEP: failed to find events of Pod "kube-controller-manager-quick-start-pq57sj-control-plane-dnl2d"
STEP: Collecting events for Pod kube-system/etcd-quick-start-pq57sj-control-plane-dnl2d
STEP: failed to find events of Pod "etcd-quick-start-pq57sj-control-plane-dnl2d"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-pq57sj-control-plane-dnl2d, container kube-apiserver
STEP: Collecting events for Pod kube-system/csi-proxy-dp92q
STEP: Creating log watcher for controller kube-system/kube-proxy-877mb, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-k667w, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-windows-dcrqr
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-pq57sj-control-plane-dnl2d, container kube-controller-manager
... skipping 4 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-p2j7b, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-p2j7b
STEP: Collecting events for Pod kube-system/calico-node-zgbp4
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-pq57sj-control-plane-dnl2d, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-pq57sj-control-plane-dnl2d
STEP: Creating log watcher for controller kube-system/containerd-logger-hn82j, container containerd-logger
STEP: failed to find events of Pod "kube-scheduler-quick-start-pq57sj-control-plane-dnl2d"
STEP: Collecting events for Pod kube-system/containerd-logger-9xmpg
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-pq57sj-control-plane-dnl2d
STEP: failed to find events of Pod "kube-apiserver-quick-start-pq57sj-control-plane-dnl2d"
STEP: Creating log watcher for controller kube-system/kube-proxy-xpwdp, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-ff6gz
STEP: Collecting events for Pod kube-system/kube-proxy-windows-9pjnr
STEP: Creating log watcher for controller kube-system/calico-node-windows-ff6gz, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-9pjnr, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-xpwdp
... skipping 92 lines ...
Dec 29 21:26:32.977: INFO: Collecting logs for Windows node md-scale-g9p79 in cluster md-scale-ud4z08 in namespace md-scale-n11naf
Dec 29 21:29:08.191: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-g9p79 to /logs/artifacts/clusters/md-scale-ud4z08/machines/md-scale-ud4z08-md-win-678bd84744-9thtz/crashdumps.tar
Dec 29 21:29:09.860: INFO: Collecting boot logs for AzureMachine md-scale-ud4z08-md-win-g9p79
Failed to get logs for machine md-scale-ud4z08-md-win-678bd84744-9thtz, cluster md-scale-n11naf/md-scale-ud4z08: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Dec 29 21:29:10.754: INFO: Collecting logs for Windows node md-scale-vc7jn in cluster md-scale-ud4z08 in namespace md-scale-n11naf
Dec 29 21:31:47.155: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-vc7jn to /logs/artifacts/clusters/md-scale-ud4z08/machines/md-scale-ud4z08-md-win-678bd84744-zmgxw/crashdumps.tar
Dec 29 21:31:48.778: INFO: Collecting boot logs for AzureMachine md-scale-ud4z08-md-win-vc7jn
Failed to get logs for machine md-scale-ud4z08-md-win-678bd84744-zmgxw, cluster md-scale-n11naf/md-scale-ud4z08: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-n11naf/md-scale-ud4z08 kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-b5tzd
STEP: Collecting events for Pod kube-system/etcd-md-scale-ud4z08-control-plane-6jt9h
STEP: Creating log watcher for controller kube-system/calico-node-windows-gbdp2, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-9qvz8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-ud4z08-control-plane-6jt9h, container kube-controller-manager
STEP: failed to find events of Pod "etcd-md-scale-ud4z08-control-plane-6jt9h"
STEP: Creating log watcher for controller kube-system/csi-proxy-gqnr9, container csi-proxy
STEP: Fetching kube-system pod logs took 408.904995ms
STEP: Dumping workload cluster md-scale-n11naf/md-scale-ud4z08 Azure activity log
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-ud4z08-control-plane-6jt9h
STEP: failed to find events of Pod "kube-controller-manager-md-scale-ud4z08-control-plane-6jt9h"
STEP: Creating log watcher for controller kube-system/kube-proxy-5p9hw, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-s9ntm, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-gbdp2, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-9qvz8
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-s9ntm
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-ud4z08-control-plane-6jt9h, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-5nv6c
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-ud4z08-control-plane-6jt9h
STEP: Creating log watcher for controller kube-system/etcd-md-scale-ud4z08-control-plane-6jt9h, container etcd
STEP: failed to find events of Pod "kube-apiserver-md-scale-ud4z08-control-plane-6jt9h"
STEP: Creating log watcher for controller kube-system/calico-node-5nv6c, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-z6nfk, container kube-proxy
STEP: Collecting events for Pod kube-system/containerd-logger-x6r9g
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-ftcfr, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-vwgzt, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-b5tzd, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-windows-z6nfk
STEP: Collecting events for Pod kube-system/kube-proxy-5p9hw
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-ud4z08-control-plane-6jt9h, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-ud4z08-control-plane-6jt9h
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nmgqp, container kube-proxy
STEP: Creating log watcher for controller kube-system/containerd-logger-2rlkh, container containerd-logger
STEP: Creating log watcher for controller kube-system/containerd-logger-x6r9g, container containerd-logger
STEP: failed to find events of Pod "kube-scheduler-md-scale-ud4z08-control-plane-6jt9h"
STEP: Creating log watcher for controller kube-system/calico-node-windows-vwgzt, container calico-node-startup
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-ftcfr
STEP: Collecting events for Pod kube-system/kube-proxy-windows-nmgqp
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-xl6tm, container coredns
STEP: Collecting events for Pod kube-system/csi-proxy-gqnr9
STEP: Collecting events for Pod kube-system/calico-node-windows-vwgzt
... skipping 20 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-12-30T01:02:43Z"}
++ early_exit_handler
++ '[' -n 163 ']'
++ kill -TERM 163
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-12-30T01:17:43Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-12-30T01:17:43Z"}