Result: FAILURE
Tests: 1 failed / 27 succeeded
Started: 2023-01-28 21:13
Elapsed: 46m0s
Revision: release-1.7

Test Failures


capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields 34m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000f199b0>: {
        Op: "Get",
        URL: "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <http.tlsHandshakeTimeoutError>{},
    }
    Get "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout
occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/28/23 21:56:24.601





Error lines from build-log.txt

... skipping 828 lines ...
------------------------------
• [1135.333 seconds]
Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

  Captured StdOut/StdErr Output >>
  2023/01/28 21:23:04 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-0bbdmk-md-0 created
  cluster.cluster.x-k8s.io/self-hosted-0bbdmk created
  machinedeployment.cluster.x-k8s.io/self-hosted-0bbdmk-md-0 created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-0bbdmk-control-plane created
  azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-0bbdmk created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
... skipping 207 lines ...
  Jan 28 21:35:05.971: INFO: Fetching activity logs took 1.422098335s
  Jan 28 21:35:05.971: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
  Jan 28 21:35:06.340: INFO: Deleting all clusters in the self-hosted namespace
  STEP: Deleting cluster self-hosted-0bbdmk @ 01/28/23 21:35:06.364
  INFO: Waiting for the Cluster self-hosted/self-hosted-0bbdmk to be deleted
  STEP: Waiting for cluster self-hosted-0bbdmk to be deleted @ 01/28/23 21:35:06.373
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8c96b57bb-khkq9, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-6bc947c55b-mvrjg, container manager: http2: client connection lost
  Jan 28 21:39:46.518: INFO: Deleting namespace used for hosting the "self-hosted" test spec
  INFO: Deleting namespace self-hosted
  Jan 28 21:39:46.536: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
  STEP: Redacting sensitive information from logs @ 01/28/23 21:39:47.115
  Jan 28 21:40:48.647: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
  STEP: Redacting sensitive information from logs @ 01/28/23 21:40:48.647
... skipping 27 lines ...
  configmap/cni-quick-start-jh0u1r-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-quick-start-jh0u1r created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine quick-start-jh0u1r-md-win-86b9b4f868-f66dv, Cluster quick-start-4kkwys/quick-start-jh0u1r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine quick-start-jh0u1r-md-win-86b9b4f868-pffvh, Cluster quick-start-4kkwys/quick-start-jh0u1r: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "quick-start" test spec @ 01/28/23 21:23:04.764
  INFO: Creating namespace quick-start-4kkwys
... skipping 603 lines ...
  Jan 28 21:41:11.486: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-resizer
  Jan 28 21:41:11.486: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container liveness-probe
  Jan 28 21:41:11.487: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-attacher
  Jan 28 21:41:11.487: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container azuredisk
  Jan 28 21:41:11.488: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-snapshotter
  Jan 28 21:41:11.488: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2
  Jan 28 21:41:11.642: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-snapshotter: container "csi-snapshotter" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.647: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container liveness-probe: container "liveness-probe" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.647: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-attacher: container "csi-attacher" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.656: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container azuredisk: container "azuredisk" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.656: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-provisioner: container "csi-provisioner" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.656: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-545d478dbf-qjlp2, container csi-resizer: container "csi-resizer" in pod "csi-azuredisk-controller-545d478dbf-qjlp2" is waiting to start: ContainerCreating
  Jan 28 21:41:11.708: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container node-driver-registrar
  Jan 28 21:41:11.708: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container liveness-probe
  Jan 28 21:41:11.708: INFO: Describing Pod kube-system/csi-azuredisk-node-qcnv7
  Jan 28 21:41:11.709: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-qcnv7, container azuredisk
  Jan 28 21:41:11.918: INFO: Creating log watcher for controller kube-system/etcd-node-drain-en543w-control-plane-nt2bn, container etcd
  Jan 28 21:41:11.918: INFO: Describing Pod kube-system/etcd-node-drain-en543w-control-plane-nt2bn
... skipping 65 lines ...
  configmap/cni-md-scale-yo3qx5-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-scale-yo3qx5 created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-scale-yo3qx5-md-win-76648b6b95-2mws6, Cluster md-scale-fhg3vm/md-scale-yo3qx5: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine md-scale-yo3qx5-md-win-76648b6b95-752f9, Cluster md-scale-fhg3vm/md-scale-yo3qx5: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-scale" test spec @ 01/28/23 21:23:04.772
  INFO: Creating namespace md-scale-fhg3vm
... skipping 370 lines ...
  Jan 28 21:46:00.101: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-54c7fc9cf5-qvzgs, container calico-apiserver
  Jan 28 21:46:00.101: INFO: Describing Pod calico-apiserver/calico-apiserver-54c7fc9cf5-qvzgs
  Jan 28 21:46:00.332: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-sjqkz, container calico-kube-controllers
  Jan 28 21:46:00.332: INFO: Describing Pod calico-system/calico-kube-controllers-594d54f99-sjqkz
  Jan 28 21:46:00.588: INFO: Creating log watcher for controller calico-system/calico-node-47k7b, container calico-node
  Jan 28 21:46:00.590: INFO: Describing Pod calico-system/calico-node-47k7b
  Jan 28 21:46:00.715: INFO: Error starting logs stream for pod calico-system/calico-node-47k7b, container calico-node: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:00.840: INFO: Creating log watcher for controller calico-system/calico-node-z6js9, container calico-node
  Jan 28 21:46:00.840: INFO: Describing Pod calico-system/calico-node-z6js9
  Jan 28 21:46:01.256: INFO: Creating log watcher for controller calico-system/calico-typha-dbfdfbdf9-94278, container calico-typha
  Jan 28 21:46:01.256: INFO: Describing Pod calico-system/calico-typha-dbfdfbdf9-94278
  Jan 28 21:46:01.472: INFO: Creating log watcher for controller calico-system/csi-node-driver-5wwtq, container calico-csi
  Jan 28 21:46:01.472: INFO: Creating log watcher for controller calico-system/csi-node-driver-5wwtq, container csi-node-driver-registrar
  Jan 28 21:46:01.473: INFO: Describing Pod calico-system/csi-node-driver-5wwtq
  Jan 28 21:46:01.686: INFO: Creating log watcher for controller calico-system/csi-node-driver-rzlfp, container calico-csi
  Jan 28 21:46:01.687: INFO: Creating log watcher for controller calico-system/csi-node-driver-rzlfp, container csi-node-driver-registrar
  Jan 28 21:46:01.687: INFO: Describing Pod calico-system/csi-node-driver-rzlfp
  Jan 28 21:46:01.792: INFO: Error starting logs stream for pod calico-system/csi-node-driver-rzlfp, container csi-node-driver-registrar: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:01.792: INFO: Error starting logs stream for pod calico-system/csi-node-driver-rzlfp, container calico-csi: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:01.901: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-5rffr, container coredns
  Jan 28 21:46:01.902: INFO: Describing Pod kube-system/coredns-57575c5f89-5rffr
  Jan 28 21:46:02.115: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-82tn2, container coredns
  Jan 28 21:46:02.115: INFO: Describing Pod kube-system/coredns-57575c5f89-82tn2
  Jan 28 21:46:02.336: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-snapshotter
  Jan 28 21:46:02.336: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-resizer
... skipping 3 lines ...
  Jan 28 21:46:02.337: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-wc24c, container csi-attacher
  Jan 28 21:46:02.337: INFO: Describing Pod kube-system/csi-azuredisk-controller-545d478dbf-wc24c
  Jan 28 21:46:02.565: INFO: Describing Pod kube-system/csi-azuredisk-node-pmfvj
  Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container node-driver-registrar
  Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container liveness-probe
  Jan 28 21:46:02.565: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-pmfvj, container azuredisk
  Jan 28 21:46:02.671: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container liveness-probe: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:02.672: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container node-driver-registrar: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:02.673: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-pmfvj, container azuredisk: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:02.963: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container liveness-probe
  Jan 28 21:46:02.963: INFO: Describing Pod kube-system/csi-azuredisk-node-vmd5g
  Jan 28 21:46:02.964: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container node-driver-registrar
  Jan 28 21:46:02.964: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-vmd5g, container azuredisk
  Jan 28 21:46:03.368: INFO: Describing Pod kube-system/etcd-machine-pool-xvkbmk-control-plane-8n786
  Jan 28 21:46:03.368: INFO: Creating log watcher for controller kube-system/etcd-machine-pool-xvkbmk-control-plane-8n786, container etcd
... skipping 2 lines ...
  Jan 28 21:46:04.160: INFO: Describing Pod kube-system/kube-controller-manager-machine-pool-xvkbmk-control-plane-8n786
  Jan 28 21:46:04.160: INFO: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-xvkbmk-control-plane-8n786, container kube-controller-manager
  Jan 28 21:46:04.603: INFO: Creating log watcher for controller kube-system/kube-proxy-2r667, container kube-proxy
  Jan 28 21:46:04.603: INFO: Describing Pod kube-system/kube-proxy-2r667
  Jan 28 21:46:04.962: INFO: Describing Pod kube-system/kube-proxy-5mv8q
  Jan 28 21:46:04.962: INFO: Creating log watcher for controller kube-system/kube-proxy-5mv8q, container kube-proxy
  Jan 28 21:46:05.067: INFO: Error starting logs stream for pod kube-system/kube-proxy-5mv8q, container kube-proxy: pods "machine-pool-xvkbmk-mp-0000002" not found
  Jan 28 21:46:05.360: INFO: Describing Pod kube-system/kube-scheduler-machine-pool-xvkbmk-control-plane-8n786
  Jan 28 21:46:05.360: INFO: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-xvkbmk-control-plane-8n786, container kube-scheduler
  Jan 28 21:46:05.758: INFO: Fetching kube-system pod logs took 6.984216025s
  Jan 28 21:46:05.758: INFO: Dumping workload cluster machine-pool-nnrpni/machine-pool-xvkbmk Azure activity log
  Jan 28 21:46:05.758: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-k9hg4, container tigera-operator
  Jan 28 21:46:05.758: INFO: Describing Pod tigera-operator/tigera-operator-65d6bf4d4f-k9hg4
... skipping 11 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2081.405 seconds]
Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  [AfterEach] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103
  [It] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71

  Captured StdOut/StdErr Output >>
  cluster.cluster.x-k8s.io/md-rollout-ilisli created
... skipping 14 lines ...
  configmap/cni-md-rollout-ilisli-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-rollout-ilisli created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-6zr95, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-qgvcj: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-qgvcj' under resource group 'capz-e2e-onmgii' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
  Failed to get logs for Machine md-rollout-ilisli-md-win-794dbb7cf-jwv2f, Cluster md-rollout-tmdobj/md-rollout-ilisli: [dialing from control plane to target node at md-rollou-mxf9p: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-mxf9p' under resource group 'capz-e2e-onmgii' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
  Failed to get logs for Machine md-rollout-ilisli-md-win-c9d858f64-fzhnk, Cluster md-rollout-tmdobj/md-rollout-ilisli: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sat, 28 Jan 2023 21:23:04 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-rollout" test spec @ 01/28/23 21:23:04.898
  INFO: Creating namespace md-rollout-tmdobj
... skipping 127 lines ...
  Jan 28 21:47:42.651: INFO: Collecting logs for Windows node md-rollou-9nrb8 in cluster md-rollout-ilisli in namespace md-rollout-tmdobj

  Jan 28 21:52:18.638: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-9nrb8 to /logs/artifacts/clusters/md-rollout-ilisli/machines/md-rollout-ilisli-md-win-c9d858f64-fzhnk/crashdumps.tar
  Jan 28 21:53:01.769: INFO: Collecting boot logs for AzureMachine md-rollout-ilisli-md-win-p08ive-9nrb8

  Jan 28 21:53:03.126: INFO: Dumping workload cluster md-rollout-tmdobj/md-rollout-ilisli kube-system pod logs
  [FAILED] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193 @ 01/28/23 21:56:24.601
  Jan 28 21:56:24.601: INFO: FAILED!
  Jan 28 21:56:24.601: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
  STEP: Redacting sensitive information from logs @ 01/28/23 21:56:24.601
  INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Sat, 28 Jan 2023 21:57:46 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  << Timeline

  [FAILED] Failed to get controller-runtime client
  Unexpected error:
      <*url.Error | 0xc000f199b0>: {
          Op: "Get",
          URL: "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
          Err: <http.tlsHandshakeTimeoutError>{},
      }
      Get "https://md-rollout-ilisli-27b77228.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: TLS handshake timeout
  occurred
... skipping 26 lines ...
[ReportAfterSuite] PASSED [0.011 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:193

Ran 8 of 26 Specs in 2239.954 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 99 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (2238.37s)
FAIL


Ginkgo ran 1 suite in 39m32.185979028s

Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...