Result: FAILURE
Tests: 1 failed / 27 succeeded
Started: 2023-01-21 21:11
Elapsed: 45m59s
Revision: release-1.7

Test Failures


capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields (34m29s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00129cf90>: {
        Op: "Get",
        URL: "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout
occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/21/23 21:53:54.651
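The failure is raised from the Cluster API test framework's cluster proxy (cluster_proxy.go:193) while it builds a client for the workload cluster during the AfterEach resource dump. A minimal Go sketch of that kind of setup, assuming a kubeconfig path for the workload cluster (illustrative only, not the framework's actual code):

  // Illustrative sketch: build a controller-runtime client for a workload
  // cluster from its kubeconfig. This only shows where a
  // "net/http: TLS handshake timeout" would surface.
  package diag // hypothetical helper package

  import (
      "fmt"

      "k8s.io/client-go/tools/clientcmd"
      "sigs.k8s.io/controller-runtime/pkg/client"
  )

  func workloadClusterClient(kubeconfigPath string) (client.Client, error) {
      // Load the rest.Config (apiserver URL, CA bundle, client certs).
      cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
      if err != nil {
          return nil, fmt.Errorf("loading kubeconfig: %w", err)
      }
      // client.New sets up a REST mapper, which issues discovery requests
      // (e.g. GET /api?timeout=32s) against the apiserver; a TLS handshake
      // that never completes surfaces here as a *url.Error like the one above.
      c, err := client.New(cfg, client.Options{})
      if err != nil {
          return nil, fmt.Errorf("failed to get controller-runtime client: %w", err)
      }
      return c, nil
  }

In this run the handshake with md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443 timed out, so the AfterEach step could not dump the workload cluster's resources.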

				

Error lines from build-log.txt

... skipping 784 lines ...
  Jan 21 21:31:48.611: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container csi-attacher
  Jan 21 21:31:48.611: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-4qprp
  Jan 21 21:31:48.611: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container csi-resizer
  Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-4qprp, container azuredisk
  Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-54vdv, container liveness-probe
  Jan 21 21:31:48.612: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-b768v, container liveness-probe
  Jan 21 21:31:48.666: INFO: Error starting logs stream for pod calico-system/calico-node-4ddpz, container calico-node: container "calico-node" in pod "calico-node-4ddpz" is waiting to start: PodInitializing
  Jan 21 21:31:48.667: INFO: Error starting logs stream for pod calico-system/csi-node-driver-q2wk5, container csi-node-driver-registrar: container "csi-node-driver-registrar" in pod "csi-node-driver-q2wk5" is waiting to start: ContainerCreating
  Jan 21 21:31:48.668: INFO: Error starting logs stream for pod calico-system/csi-node-driver-q2wk5, container calico-csi: container "calico-csi" in pod "csi-node-driver-q2wk5" is waiting to start: ContainerCreating
  Jan 21 21:31:48.718: INFO: Fetching kube-system pod logs took 1.230677654s
  Jan 21 21:31:48.718: INFO: Dumping workload cluster mhc-remediation-liyacx/mhc-remediation-kyct31 Azure activity log
  Jan 21 21:31:48.718: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-sll49, container tigera-operator
  Jan 21 21:31:48.718: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-sll49
  Jan 21 21:31:50.785: INFO: Fetching activity logs took 2.066843904s
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-liyacx" namespace @ 01/21/23 21:31:50.785
... skipping 35 lines ...
  configmap/cni-quick-start-rivhm1-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-quick-start-rivhm1 created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine quick-start-rivhm1-md-win-745f6bf4bf-fmk5d, Cluster quick-start-uul0ri/quick-start-rivhm1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine quick-start-rivhm1-md-win-745f6bf4bf-xdssh, Cluster quick-start-uul0ri/quick-start-rivhm1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "quick-start" test spec @ 01/21/23 21:20:51.554
  INFO: Creating namespace quick-start-uul0ri
... skipping 428 lines ...
  Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-8fj9d, container kube-scheduler
  Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-dspj8, container kube-scheduler
  Jan 21 21:37:12.249: INFO: Collecting events for Pod kube-system/etcd-mhc-remediation-x44xcx-control-plane-8fj9d
  Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/etcd-mhc-remediation-x44xcx-control-plane-dspj8, container etcd
  Jan 21 21:37:12.249: INFO: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-lmcnb, container kube-scheduler
  Jan 21 21:37:12.249: INFO: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-x44xcx-control-plane-dspj8
  Jan 21 21:37:12.447: INFO: Error starting logs stream for pod calico-system/calico-node-gk28s, container calico-node: container "calico-node" in pod "calico-node-gk28s" is waiting to start: PodInitializing
  Jan 21 21:37:12.594: INFO: Fetching kube-system pod logs took 1.512045228s
  Jan 21 21:37:12.595: INFO: Dumping workload cluster mhc-remediation-g1mn07/mhc-remediation-x44xcx Azure activity log
  Jan 21 21:37:12.595: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-k74p9, container tigera-operator
  Jan 21 21:37:12.595: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-k74p9
  Jan 21 21:37:16.407: INFO: Fetching activity logs took 3.812069997s
  STEP: Dumping all the Cluster API resources in the "mhc-remediation-g1mn07" namespace @ 01/21/23 21:37:16.407
... skipping 14 lines ...
------------------------------
• [1333.795 seconds]
Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

  Captured StdOut/StdErr Output >>
  2023/01/21 21:20:51 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-sy1b4u-md-0 created
  cluster.cluster.x-k8s.io/self-hosted-sy1b4u created
  machinedeployment.cluster.x-k8s.io/self-hosted-sy1b4u-md-0 created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-sy1b4u-control-plane created
  azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-sy1b4u created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
... skipping 365 lines ...
  Jan 21 21:38:41.412: INFO: Collecting events for Pod calico-system/calico-node-jbdn9
  Jan 21 21:38:41.412: INFO: Creating log watcher for controller calico-system/csi-node-driver-854bq, container csi-node-driver-registrar
  Jan 21 21:38:41.412: INFO: Creating log watcher for controller calico-system/csi-node-driver-8vnwx, container calico-csi
  Jan 21 21:38:41.413: INFO: Creating log watcher for controller calico-system/csi-node-driver-8vnwx, container csi-node-driver-registrar
  Jan 21 21:38:41.413: INFO: Creating log watcher for controller calico-system/calico-typha-5b6456dd7b-th9lx, container calico-typha
  Jan 21 21:38:41.413: INFO: Collecting events for Pod calico-system/csi-node-driver-8vnwx
  Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/csi-node-driver-8vnwx, container calico-csi: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/calico-node-jbdn9, container calico-node: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.525: INFO: Error starting logs stream for pod calico-system/csi-node-driver-8vnwx, container csi-node-driver-registrar: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.609: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-2752d, container coredns
  Jan 21 21:38:41.610: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-2752d
  Jan 21 21:38:41.610: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-gqz6v
  Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-gc67m, container csi-provisioner
  Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-gqz6v, container coredns
  Jan 21 21:38:41.610: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-gc67m, container csi-resizer
... skipping 27 lines ...
  Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-x2wbd, container azuredisk
  Jan 21 21:38:41.614: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-gc67m
  Jan 21 21:38:41.614: INFO: Collecting events for Pod kube-system/etcd-node-drain-1w5121-control-plane-98sww
  Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/etcd-node-drain-1w5121-control-plane-98sww, container etcd
  Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/etcd-node-drain-1w5121-control-plane-vqwsc, container etcd
  Jan 21 21:38:41.614: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-trn2b, container node-driver-registrar
  Jan 21 21:38:41.760: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container azuredisk: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-1w5121-control-plane-98sww, container kube-controller-manager: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container liveness-probe: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-proxy-mzqcd, container kube-proxy: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-x2wbd, container node-driver-registrar: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-1w5121-control-plane-98sww, container kube-scheduler: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.784: INFO: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-1w5121-control-plane-98sww, container kube-apiserver: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.785: INFO: Error starting logs stream for pod kube-system/etcd-node-drain-1w5121-control-plane-98sww, container etcd: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:41.787: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-8mr96, container web
  Jan 21 21:38:41.787: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-fj4kk, container web
  Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb
  Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-8mr96
  Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-r7l-8598949b8b-6pjgj
  Jan 21 21:38:41.787: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-fj4kk
... skipping 8 lines ...
  Jan 21 21:38:41.789: INFO: Collecting events for Pod node-drain-00idlo-unevictable-workload/unevictable-pod-r7l-8598949b8b-x6w94
  Jan 21 21:38:41.790: INFO: Creating log watcher for controller node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb, container web
  Jan 21 21:38:41.953: INFO: Fetching kube-system pod logs took 1.687029165s
  Jan 21 21:38:41.953: INFO: Dumping workload cluster node-drain-00idlo/node-drain-1w5121 Azure activity log
  Jan 21 21:38:41.953: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-p92j9, container tigera-operator
  Jan 21 21:38:41.954: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-p92j9
  Jan 21 21:38:41.956: INFO: Error starting logs stream for pod node-drain-00idlo-unevictable-workload/unevictable-pod-ph5-6f8c44cbdd-dl2tb, container web: pods "node-drain-1w5121-control-plane-98sww" not found
  Jan 21 21:38:45.204: INFO: Fetching activity logs took 3.250796871s
  STEP: Dumping all the Cluster API resources in the "node-drain-00idlo" namespace @ 01/21/23 21:38:45.204
  STEP: Deleting cluster node-drain-00idlo/node-drain-1w5121 @ 01/21/23 21:38:45.508
  STEP: Deleting cluster node-drain-1w5121 @ 01/21/23 21:38:45.527
  INFO: Waiting for the Cluster node-drain-00idlo/node-drain-1w5121 to be deleted
  STEP: Waiting for cluster node-drain-1w5121 to be deleted @ 01/21/23 21:38:45.537
... skipping 31 lines ...
  configmap/cni-md-scale-r58z8z-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-scale-r58z8z created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-scale-r58z8z-md-win-dcfb877df-6gcs6, Cluster md-scale-jhfle5/md-scale-r58z8z: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine md-scale-r58z8z-md-win-dcfb877df-lzkzm, Cluster md-scale-jhfle5/md-scale-r58z8z: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-scale" test spec @ 01/21/23 21:20:51.731
  INFO: Creating namespace md-scale-jhfle5
... skipping 383 lines ...
  Jan 21 21:43:14.968: INFO: Creating log watcher for controller calico-system/calico-typha-696958d777-cxckm, container calico-typha
  Jan 21 21:43:14.968: INFO: Creating log watcher for controller calico-system/calico-node-windows-lsbmh, container calico-node-startup
  Jan 21 21:43:14.968: INFO: Collecting events for Pod calico-system/calico-typha-696958d777-cxckm
  Jan 21 21:43:14.969: INFO: Creating log watcher for controller calico-system/csi-node-driver-cckct, container calico-csi
  Jan 21 21:43:14.969: INFO: Collecting events for Pod calico-system/calico-node-k8wct
  Jan 21 21:43:14.969: INFO: Creating log watcher for controller calico-system/calico-node-windows-lsbmh, container calico-node-felix
  Jan 21 21:43:15.083: INFO: Error starting logs stream for pod calico-system/csi-node-driver-cckct, container csi-node-driver-registrar: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.102: INFO: Error starting logs stream for pod calico-system/calico-node-windows-lsbmh, container calico-node-felix: pods "win-p-win000002" not found
  Jan 21 21:43:15.168: INFO: Error starting logs stream for pod calico-system/csi-node-driver-cckct, container calico-csi: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.215: INFO: Error starting logs stream for pod calico-system/calico-node-k8wct, container calico-node: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.215: INFO: Error starting logs stream for pod calico-system/calico-node-windows-lsbmh, container calico-node-startup: pods "win-p-win000002" not found
  Jan 21 21:43:15.223: INFO: Creating log watcher for controller kube-system/containerd-logger-kb929, container containerd-logger
  Jan 21 21:43:15.225: INFO: Collecting events for Pod kube-system/containerd-logger-kb929
  Jan 21 21:43:15.225: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container liveness-probe
  Jan 21 21:43:15.225: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container node-driver-registrar
  Jan 21 21:43:15.227: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-89qcn, container coredns
  Jan 21 21:43:15.228: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-xq8rn, container azuredisk
... skipping 29 lines ...
  Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-win-vrhq8
  Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-tpn23o-control-plane-blqdl, container kube-controller-manager
  Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/kube-apiserver-machine-pool-tpn23o-control-plane-blqdl
  Jan 21 21:43:15.238: INFO: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-tpn23o-control-plane-blqdl
  Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6xfnm, container node-driver-registrar
  Jan 21 21:43:15.238: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-vrhq8, container liveness-probe
  Jan 21 21:43:15.407: INFO: Error starting logs stream for pod kube-system/containerd-logger-kb929, container containerd-logger: pods "win-p-win000002" not found
  Jan 21 21:43:15.408: INFO: Fetching kube-system pod logs took 1.599592666s
  Jan 21 21:43:15.408: INFO: Dumping workload cluster machine-pool-hp73in/machine-pool-tpn23o Azure activity log
  Jan 21 21:43:15.408: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-r78bk, container tigera-operator
  Jan 21 21:43:15.408: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-r78bk
  Jan 21 21:43:15.410: INFO: Error starting logs stream for pod kube-system/kube-proxy-windows-7cjgp, container kube-proxy: pods "win-p-win000002" not found
  Jan 21 21:43:15.410: INFO: Error starting logs stream for pod kube-system/csi-proxy-6pmg4, container csi-proxy: pods "win-p-win000002" not found
  Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container liveness-probe: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/kube-proxy-ctx2g, container kube-proxy: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container azuredisk: pods "win-p-win000002" not found
  Jan 21 21:43:15.411: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container node-driver-registrar: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:15.412: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container liveness-probe: pods "win-p-win000002" not found
  Jan 21 21:43:15.414: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-win-vrhq8, container node-driver-registrar: pods "win-p-win000002" not found
  Jan 21 21:43:15.414: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-6xfnm, container azuredisk: pods "machine-pool-tpn23o-mp-0000002" not found
  Jan 21 21:43:18.986: INFO: Fetching activity logs took 3.57833243s
  STEP: Dumping all the Cluster API resources in the "machine-pool-hp73in" namespace @ 01/21/23 21:43:18.986
  STEP: Deleting cluster machine-pool-hp73in/machine-pool-tpn23o @ 01/21/23 21:43:19.6
  STEP: Deleting cluster machine-pool-tpn23o @ 01/21/23 21:43:19.632
  INFO: Waiting for the Cluster machine-pool-hp73in/machine-pool-tpn23o to be deleted
  STEP: Waiting for cluster machine-pool-tpn23o to be deleted @ 01/21/23 21:43:19.652
... skipping 5 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2069.861 seconds]
Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  [AfterEach] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103
  [It] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71

  Captured StdOut/StdErr Output >>
  cluster.cluster.x-k8s.io/md-rollout-p7ujbw created
... skipping 14 lines ...
  configmap/cni-md-rollout-p7ujbw-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-rollout-p7ujbw created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-689qn, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-9wcwt: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-9wcwt' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
  Failed to get logs for Machine md-rollout-p7ujbw-md-win-7bc6f966b4-6mkln, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [dialing from control plane to target node at md-rollou-gjvs7: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-gjvs7' under resource group 'capz-e2e-zu0ndi' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
  Failed to get logs for Machine md-rollout-p7ujbw-md-win-d868d747d-7tzcx, Cluster md-rollout-kyz24f/md-rollout-p7ujbw: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 21 Jan 2023 21:20:51 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-rollout" test spec @ 01/21/23 21:20:51.731
  INFO: Creating namespace md-rollout-kyz24f
... skipping 127 lines ...
  Jan 21 21:45:08.499: INFO: Collecting logs for Windows node md-rollou-z5xcg in cluster md-rollout-p7ujbw in namespace md-rollout-kyz24f

  Jan 21 21:49:46.617: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-z5xcg to /logs/artifacts/clusters/md-rollout-p7ujbw/machines/md-rollout-p7ujbw-md-win-d868d747d-7tzcx/crashdumps.tar
  Jan 21 21:50:31.810: INFO: Collecting boot logs for AzureMachine md-rollout-p7ujbw-md-win-63mrcz-z5xcg

  Jan 21 21:50:33.203: INFO: Dumping workload cluster md-rollout-kyz24f/md-rollout-p7ujbw kube-system pod logs
  [FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/21/23 21:53:54.651
  Jan 21 21:53:54.651: INFO: FAILED!
  Jan 21 21:53:54.651: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
  STEP: Redacting sensitive information from logs @ 01/21/23 21:53:54.651
  INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 21 Jan 2023 21:55:21 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  << Timeline

  [FAILED] Failed to get controller-runtime client
  Unexpected error:
      <*url.Error | 0xc00129cf90>: {
          Op: "Get",
          URL: "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
          Err: <http.tlsHandshakeTimeoutError>{},
      }
      Get "https://md-rollout-p7ujbw-92d0d2fc.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout
  occurred
... skipping 26 lines ...
[ReportAfterSuite] PASSED [0.012 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193

Ran 8 of 26 Specs in 2229.032 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 85 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (2227.41s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 6 lines ...

PASS


Ginkgo ran 1 suite in 39m25.550553681s

Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...