Result   | FAILURE
Tests    | 0 failed / 7 succeeded
Started  |
Elapsed  | 4h15m
Revision | release-1.6
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 606 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-796mv, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-45j7i5-control-plane-25wv6, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-l4xbq, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-796mv
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-fj72t, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-45j7i5-control-plane-25wv6
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-45j7i5-control-plane-25wv6"
STEP: Creating log watcher for controller kube-system/kube-proxy-fjzs9, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-fj72t
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-45j7i5-control-plane-25wv6, container etcd
STEP: Collecting events for Pod kube-system/kube-proxy-fjzs9
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-45j7i5-control-plane-25wv6
STEP: failed to find events of Pod "etcd-mhc-remediation-45j7i5-control-plane-25wv6"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-45j7i5-control-plane-25wv6, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-45j7i5-control-plane-25wv6
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-45j7i5-control-plane-25wv6"
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-45j7i5-control-plane-25wv6, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-l4xbq
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-45j7i5-control-plane-25wv6
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-45j7i5-control-plane-25wv6"
STEP: Fetching activity logs took 2.943028538s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-w2c41o" namespace
STEP: Deleting cluster mhc-remediation-w2c41o/mhc-remediation-45j7i5
STEP: Deleting cluster mhc-remediation-45j7i5
INFO: Waiting for the Cluster mhc-remediation-w2c41o/mhc-remediation-45j7i5 to be deleted
STEP: Waiting for cluster mhc-remediation-45j7i5 to be deleted
... skipping 16 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Tue, 03 Jan 2023 21:11:41 UTC on Ginkgo node 2 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 3 21:11:41.191: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/03 21:11:41 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-h1vsg0" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-h1vsg0 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 62 lines ...
STEP: Fetching kube-system pod logs took 609.2959ms
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9msj9, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-h1vsg0-control-plane-dnvhp, container kube-apiserver
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9msj9
STEP: Collecting events for Pod kube-system/kube-proxy-7gvqj
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-h1vsg0-control-plane-dnvhp
STEP: failed to find events of Pod "kube-apiserver-self-hosted-h1vsg0-control-plane-dnvhp"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-h1vsg0-control-plane-dnvhp, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-zqgx5, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-h1vsg0-control-plane-dnvhp
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-h1vsg0-control-plane-dnvhp"
STEP: Creating log watcher for controller kube-system/kube-proxy-7gvqj, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-h1vsg0-control-plane-dnvhp, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-h1vsg0-control-plane-dnvhp
STEP: failed to find events of Pod "kube-scheduler-self-hosted-h1vsg0-control-plane-dnvhp"
STEP: Dumping workload cluster self-hosted/self-hosted-h1vsg0 Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-zqgx5
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-h8pkq, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-zjkbt, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-zjkbt
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-mln65, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-mln65
STEP: Creating log watcher for controller kube-system/calico-node-d9mqm, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-h8pkq
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-h1vsg0-control-plane-dnvhp, container etcd
STEP: Collecting events for Pod kube-system/etcd-self-hosted-h1vsg0-control-plane-dnvhp
STEP: failed to find events of Pod "etcd-self-hosted-h1vsg0-control-plane-dnvhp"
STEP: Collecting events for Pod kube-system/calico-node-d9mqm
STEP: Fetching activity logs took 1.847665581s
Jan 3 21:22:48.125: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 3 21:22:48.498: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-h1vsg0
INFO: Waiting for the Cluster self-hosted/self-hosted-h1vsg0 to be deleted
... skipping 239 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-hvl6k, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-nea9wl-control-plane-f79jj
STEP: Dumping workload cluster node-drain-25m744/node-drain-nea9wl Azure activity log
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-nea9wl-control-plane-f79jj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-nea9wl-control-plane-nkdrq, container kube-apiserver
STEP: Collecting events for Pod kube-system/etcd-node-drain-nea9wl-control-plane-f79jj
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-nea9wl-control-plane-f79jj, container kube-controller-manager: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-nea9wl-control-plane-f79jj, container kube-scheduler: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-nea9wl-control-plane-f79jj, container etcd: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Error starting logs stream for pod kube-system/calico-node-6tltd, container calico-node: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-hvl6k, container kube-proxy: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-nea9wl-control-plane-f79jj, container kube-apiserver: pods "node-drain-nea9wl-control-plane-f79jj" not found
STEP: Fetching activity logs took 3.409565285s
STEP: Dumping all the Cluster API resources in the "node-drain-25m744" namespace
STEP: Deleting cluster node-drain-25m744/node-drain-nea9wl
STEP: Deleting cluster node-drain-nea9wl
INFO: Waiting for the Cluster node-drain-25m744/node-drain-nea9wl to be deleted
STEP: Waiting for cluster node-drain-nea9wl to be deleted
... skipping 72 lines ...
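For reference, the `clusterctl config cluster` invocation logged in the self-hosted spec above, broken across lines for readability. This is the same command and flags quoted from the log, not a verified reproduction; `(default)` is the log's placeholder for the default infrastructure provider, not literal shell syntax:

    clusterctl config cluster self-hosted-h1vsg0 \
        --infrastructure (default) \
        --kubernetes-version v1.23.15 \
        --control-plane-machine-count 1 \
        --worker-machine-count 1 \
        --flavor management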
Jan 3 21:28:58.657: INFO: Collecting logs for Windows node quick-sta-xc8rn in cluster quick-start-q3n5bj in namespace quick-start-10j0ap
Jan 3 21:31:36.343: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-xc8rn to /logs/artifacts/clusters/quick-start-q3n5bj/machines/quick-start-q3n5bj-md-win-75c578b658-jvxv7/crashdumps.tar
Jan 3 21:31:39.327: INFO: Collecting boot logs for AzureMachine quick-start-q3n5bj-md-win-xc8rn
Failed to get logs for machine quick-start-q3n5bj-md-win-75c578b658-jvxv7, cluster quick-start-10j0ap/quick-start-q3n5bj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 3 21:31:40.490: INFO: Collecting logs for Windows node quick-sta-p8gvz in cluster quick-start-q3n5bj in namespace quick-start-10j0ap
Jan 3 21:34:16.401: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-p8gvz to /logs/artifacts/clusters/quick-start-q3n5bj/machines/quick-start-q3n5bj-md-win-75c578b658-lmncg/crashdumps.tar
Jan 3 21:34:18.754: INFO: Collecting boot logs for AzureMachine quick-start-q3n5bj-md-win-p8gvz
Failed to get logs for machine quick-start-q3n5bj-md-win-75c578b658-lmncg, cluster quick-start-10j0ap/quick-start-q3n5bj: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-10j0ap/quick-start-q3n5bj kube-system pod logs
STEP: Fetching kube-system pod logs took 670.432603ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-dx5dc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/containerd-logger-572hw, container containerd-logger
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-dx5dc
STEP: Collecting events for Pod kube-system/calico-node-vfr7c
... skipping 5 lines ...
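For reference, the PowerShell one-liner embedded in the "Failed to get logs" errors above, expanded for readability. This is the same command quoted from the log; the comment added here is descriptive only:

    $p = 'c:\localdumps'
    if (Test-Path $p) {
        # Archive any crash dumps so they can be copied off the node.
        tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_" }
    } else {
        Write-Host "No crash dumps found at $p"
    }

Its non-zero exit status (and that of `Get-Content "C:\cni.log"`) is what surfaces as "Process exited with status 1" in the log-collection errors.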
STEP: Creating log watcher for controller kube-system/containerd-logger-8c6fl, container containerd-logger
STEP: Collecting events for Pod kube-system/containerd-logger-8c6fl
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-5sn7j, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-5sn7j
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-rdmnb, container coredns
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-q3n5bj-control-plane-scdp4
STEP: failed to find events of Pod "kube-apiserver-quick-start-q3n5bj-control-plane-scdp4"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-q3n5bj-control-plane-scdp4, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-7lbmv, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-proxy-4wnlw, container csi-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-9q7dx, container kube-proxy
STEP: Dumping workload cluster quick-start-10j0ap/quick-start-q3n5bj Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-9q7dx
STEP: Creating log watcher for controller kube-system/kube-proxy-f7x2h, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-7lbmv
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-mvln8, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-f7x2h
STEP: Creating log watcher for controller kube-system/calico-node-windows-gzr9m, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-gzr9m, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-q3n5bj-control-plane-scdp4
STEP: failed to find events of Pod "kube-controller-manager-quick-start-q3n5bj-control-plane-scdp4"
STEP: Collecting events for Pod kube-system/csi-proxy-t26jq
STEP: Collecting events for Pod kube-system/csi-proxy-4wnlw
STEP: Creating log watcher for controller kube-system/etcd-quick-start-q3n5bj-control-plane-scdp4, container etcd
STEP: Creating log watcher for controller kube-system/csi-proxy-t26jq, container csi-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-q3n5bj-control-plane-scdp4, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-5ftt6
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-q3n5bj-control-plane-scdp4
STEP: Creating log watcher for controller kube-system/calico-node-windows-sfn66, container calico-node-felix
STEP: failed to find events of Pod "kube-scheduler-quick-start-q3n5bj-control-plane-scdp4"
STEP: Collecting events for Pod kube-system/calico-node-windows-sfn66
STEP: Collecting events for Pod kube-system/kube-proxy-windows-mvln8
STEP: Collecting events for Pod kube-system/etcd-quick-start-q3n5bj-control-plane-scdp4
STEP: failed to find events of Pod "etcd-quick-start-q3n5bj-control-plane-scdp4"
STEP: Creating log watcher for controller kube-system/calico-node-windows-sfn66, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-windows-gzr9m
STEP: Fetching activity logs took 4.624171708s
STEP: Dumping all the Cluster API resources in the "quick-start-10j0ap" namespace
STEP: Deleting cluster quick-start-10j0ap/quick-start-q3n5bj
STEP: Deleting cluster quick-start-q3n5bj
... skipping 94 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-v99d6, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-qd67l
STEP: Collecting events for Pod kube-system/kube-proxy-2w2gp
STEP: Creating log watcher for controller kube-system/kube-proxy-2w2gp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-qd67l, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-7x7ey9-control-plane-j9wlp
STEP: failed to find events of Pod "kube-apiserver-machine-pool-7x7ey9-control-plane-j9wlp"
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-7x7ey9-control-plane-j9wlp, container etcd
STEP: Collecting events for Pod kube-system/etcd-machine-pool-7x7ey9-control-plane-j9wlp
STEP: failed to find events of Pod "etcd-machine-pool-7x7ey9-control-plane-j9wlp"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-7x7ey9-control-plane-j9wlp, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-7x7ey9-control-plane-j9wlp, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-windows-n88wr
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-7x7ey9-control-plane-j9wlp
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-7x7ey9-control-plane-j9wlp"
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-n88wr, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-7x7ey9-control-plane-j9wlp
STEP: failed to find events of Pod "kube-scheduler-machine-pool-7x7ey9-control-plane-j9wlp"
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-7x7ey9-control-plane-j9wlp, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-bg4d5, container calico-node: pods "machine-pool-7x7ey9-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-vndwt, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-n88wr, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-vndwt, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-qd67l, container kube-proxy: pods "machine-pool-7x7ey9-mp-0000002" not found
STEP: Fetching activity logs took 2.218263991s
STEP: Dumping all the Cluster API resources in the "machine-pool-67u8ab" namespace
STEP: Deleting cluster machine-pool-67u8ab/machine-pool-7x7ey9
STEP: Deleting cluster machine-pool-7x7ey9
INFO: Waiting for the Cluster machine-pool-67u8ab/machine-pool-7x7ey9 to be deleted
STEP: Waiting for cluster machine-pool-7x7ey9 to be deleted
... skipping 78 lines ...
Jan 3 21:31:58.375: INFO: Collecting logs for Windows node md-scale-x7djv in cluster md-scale-09ddg4 in namespace md-scale-toyq1q
Jan 3 21:34:31.194: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-x7djv to /logs/artifacts/clusters/md-scale-09ddg4/machines/md-scale-09ddg4-md-win-754ff7649-cgmbv/crashdumps.tar
Jan 3 21:34:33.428: INFO: Collecting boot logs for AzureMachine md-scale-09ddg4-md-win-x7djv
Failed to get logs for machine md-scale-09ddg4-md-win-754ff7649-cgmbv, cluster md-scale-toyq1q/md-scale-09ddg4: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 3 21:34:34.472: INFO: Collecting logs for Windows node md-scale-rsnjj in cluster md-scale-09ddg4 in namespace md-scale-toyq1q
Jan 3 21:37:08.582: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-rsnjj to /logs/artifacts/clusters/md-scale-09ddg4/machines/md-scale-09ddg4-md-win-754ff7649-pm4nd/crashdumps.tar
Jan 3 21:37:10.774: INFO: Collecting boot logs for AzureMachine md-scale-09ddg4-md-win-rsnjj
Failed to get logs for machine md-scale-09ddg4-md-win-754ff7649-pm4nd, cluster md-scale-toyq1q/md-scale-09ddg4: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-toyq1q/md-scale-09ddg4 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-km9gj, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-pz4ch, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-pz4ch
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-xhf8f, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-xhf8f
... skipping 18 lines ...
STEP: Dumping workload cluster md-scale-toyq1q/md-scale-09ddg4 Azure activity log
STEP: Collecting events for Pod kube-system/csi-proxy-8lwgg
STEP: Creating log watcher for controller kube-system/csi-proxy-rjmdn, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-qqsgb
STEP: Creating log watcher for controller kube-system/etcd-md-scale-09ddg4-control-plane-664vg, container etcd
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-09ddg4-control-plane-664vg
STEP: failed to find events of Pod "kube-apiserver-md-scale-09ddg4-control-plane-664vg"
STEP: Collecting events for Pod kube-system/etcd-md-scale-09ddg4-control-plane-664vg
STEP: failed to find events of Pod "etcd-md-scale-09ddg4-control-plane-664vg"
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-09ddg4-control-plane-664vg, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-c8rmh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-zxsht, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ncj2f, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-c8rmh
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-09ddg4-control-plane-664vg
STEP: failed to find events of Pod "kube-controller-manager-md-scale-09ddg4-control-plane-664vg"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-09ddg4-control-plane-664vg, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-qqsgb, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-ncj2f
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-09ddg4-control-plane-664vg, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-09ddg4-control-plane-664vg
STEP: Collecting events for Pod kube-system/kube-proxy-zxsht
STEP: failed to find events of Pod "kube-scheduler-md-scale-09ddg4-control-plane-664vg"
STEP: Fetching activity logs took 6.917895202s
STEP: Dumping all the Cluster API resources in the "md-scale-toyq1q" namespace
STEP: Deleting cluster md-scale-toyq1q/md-scale-09ddg4
STEP: Deleting cluster md-scale-09ddg4
INFO: Waiting for the Cluster md-scale-toyq1q/md-scale-09ddg4 to be deleted
STEP: Waiting for cluster md-scale-09ddg4 to be deleted
... skipping 9 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
Should successfully scale out and scale in a MachineDeployment
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:171
Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/md_scale.go:71
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-04T01:02:54Z"}
++ early_exit_handler
++ '[' -n 157 ']'
++ kill -TERM 157
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-04T01:17:54Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-04T01:17:54Z"}