Result: FAILURE
Tests: 1 failed / 27 succeeded
Started: 2023-01-15 21:07
Elapsed: 4h7m
Revision: release-1.7

Test Failures


capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields (3h54m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
[TIMEDOUT] A suite timeout occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/16/23 01:12:35.852

This is the Progress Report generated when the suite timeout occurred:
  Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields (Spec Runtime: 3h54m2.295s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:71
    In [AfterEach] (Node Runtime: 3h40m29.238s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103
      At [By Step] Dumping logs from the "md-rollout-tskhi1" workload cluster (Step Runtime: 3h40m29.238s)
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51

      Spec Goroutine
      goroutine 6453 [semacquire, 217 minutes]
        sync.runtime_Semacquire(0xc00101e140?)
          /usr/local/go/src/runtime/sema.go:62
        sync.(*WaitGroup).Wait(0xc000a9c890?)
          /usr/local/go/src/sync/waitgroup.go:139
        sigs.k8s.io/kind/pkg/errors.AggregateConcurrent({0xc000a9c890, 0x2, 0x3df40a3?})
          /home/prow/go/pkg/mod/sigs.k8s.io/kind@v0.17.0/pkg/errors/concurrent.go:54
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectLogsFromNode(0xc0015401a0, {0xc00256e8d0, 0xf}, 0x1, {0xc001336b40, 0x5d})
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:158
            | errors = append(errors, kinderrors.AggregateConcurrent(windowsK8sLogs(execToPathFn)))
            | errors = append(errors, kinderrors.AggregateConcurrent(windowsNetworkLogs(execToPathFn)))
            > errors = append(errors, kinderrors.AggregateConcurrent(windowsCrashDumpLogs(execToPathFn)))
            | errors = append(errors, sftpCopyFile(controlPlaneEndpoint, hostname, sshPort, "/c:/crashdumps.tar", filepath.Join(outputPath, "crashdumps.tar")))
            | 
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureLogCollector.CollectMachineLog({}, {0x42112a0, 0xc000138008}, {0x4221998, 0xc000598620}, 0xc000c0c920, {0xc001336b40, 0x5d})
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:74
            | hostname := getHostname(m, isAzureMachineWindows(am))
            | 
            > if err := collectLogsFromNode(cluster, hostname, isAzureMachineWindows(am), outputPath); err != nil {
            | 	errs = append(errs, err)
            | }
        sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).CollectWorkloadClusterLogs(0xc0000715c0, {0x42112a0?, 0xc000138008}, {0xc000e2d2f0, 0x11}, {0xc000e2d2d8, 0x11}, {0xc000fbabd0, 0x2a})
          /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_proxy.go:265
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.(*AzureClusterProxy).CollectWorkloadClusterLogs(0xc0000d72a0, {0x42112a0, 0xc000138008}, {0xc000e2d2f0, 0x11}, {0xc000e2d2d8, 0x11}, {0xc000fbabd0, 0x2a})
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_clusterproxy.go:93
            | func (acp *AzureClusterProxy) CollectWorkloadClusterLogs(ctx context.Context, namespace, name, outputPath string) {
            | 	Logf("Dumping workload cluster %s/%s logs", namespace, name)
            > 	acp.ClusterProxy.CollectWorkloadClusterLogs(ctx, namespace, name, outputPath)
            | 
            | 	aboveMachinesPath := strings.Replace(outputPath, "/machines", "", 1)
      > sigs.k8s.io/cluster-api/test/e2e.dumpSpecResourcesAndCleanup({0x42112a0, 0xc000138008}, {0x3d57521, 0xa}, {0x4223910, 0xc0000d72a0}, {0xc000593500, 0xf}, 0xc000d711e0, 0xc0005db030, ...)
          /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:70
            | 
            | // Dump all the logs from the workload cluster before deleting them.
            > clusterProxy.CollectWorkloadClusterLogs(ctx, cluster.Namespace, cluster.Name, filepath.Join(artifactFolder, "clusters", cluster.Name))
            | 
            | Byf("Dumping all the Cluster API resources in the %q namespace", namespace.Name)
      > sigs.k8s.io/cluster-api/test/e2e.MachineDeploymentRolloutSpec.func3()
          /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:105
            | 	AfterEach(func() {
            | 		// Dumps all the resources in the spec namespace, then cleanups the cluster object and the spec namespace itself.
            > 		dumpSpecResourcesAndCleanup(ctx, specName, input.BootstrapClusterProxy, input.ArtifactFolder, namespace, cancelWatches, clusterResources.Cluster, input.E2EConfig.GetIntervals, input.SkipCleanup)
            | 	})
            | }
        github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x1b8186e, 0xc001880c00})
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.0/internal/node.go:445
        github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.0/internal/suite.go:847
        github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
          /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.0/internal/suite.go:834

      Goroutines of Interest
      goroutine 10581 [chan receive, 217 minutes]
        golang.org/x/crypto/ssh.(*handshakeTransport).readPacket(...)
          /home/prow/go/pkg/mod/golang.org/x/crypto@v0.3.0/ssh/handshake.go:187
        golang.org/x/crypto/ssh.(*handshakeTransport).waitSession(0xc0014086e0)
          /home/prow/go/pkg/mod/golang.org/x/crypto@v0.3.0/ssh/handshake.go:154
        golang.org/x/crypto/ssh.(*connection).clientHandshake(0xc001c08200, {0xc00256e8d0, 0xf}, 0xc001619860)
          /home/prow/go/pkg/mod/golang.org/x/crypto@v0.3.0/ssh/client.go:108
        golang.org/x/crypto/ssh.NewClientConn({0x421d4f0, 0xc00180ddd0}, {0xc00256e8d0, 0xf}, 0xc0014b3450)
          /home/prow/go/pkg/mod/golang.org/x/crypto@v0.3.0/ssh/client.go:83
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.getProxiedSSHClient({0xc00347c980, 0x34}, {0xc00256e8d0, 0xf}, {0x3d4834c, 0x2})
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:481
            | 
            | // Establish an authenticated SSH conn over the client -> control plane -> target transport
            > conn, chans, reqs, err := ssh.NewClientConn(c, hostname, config)
            | if err != nil {
            | 	return nil, errors.Wrap(err, "getting a new SSH client connection")
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.execOnHost({0xc00347c980?, 0xc001715d40?}, {0xc00256e8d0?, 0xc001715db0?}, {0x3d4834c?, 0x203000?}, {0x41ef280, 0xc0008dc450}, {0x3e3effc, 0x9d}, ...)
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:493
            | func execOnHost(controlPlaneEndpoint, hostname, port string, f io.StringWriter, command string,
            | 	args ...string) error {
            > 	client, err := getProxiedSSHClient(controlPlaneEndpoint, hostname, port)
            | 	if err != nil {
            | 		return err
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectLogsFromNode.func1.1.1()
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:147
            | 		}
            | 		defer f.Close()
            > 		return execOnHost(controlPlaneEndpoint, hostname, sshPort, f, command, args...)
            | 	})
            | }
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.retryWithTimeout.func1()
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/retry.go:41
            | err := wait.PollImmediate(interval, timeout, func() (bool, error) {
            | 	pollError = nil
            > 	err := fn()
            | 	if err != nil {
            | 		pollError = err
        k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x15312f1, 0x0})
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:222
        k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x42112a0?, 0xc000138000?}, 0xc001715d30?)
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:235
        k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x42112a0, 0xc000138000}, 0xc0017e9128, 0x229de0a?)
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:662
        k8s.io/apimachinery/pkg/util/wait.poll({0x42112a0, 0xc000138000}, 0x58?, 0x229cbc5?, 0x18?)
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:596
        k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x42112a0, 0xc000138000}, 0x0?, 0xc0013aaea8?, 0x1418927?)
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:528
        k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x145f172?, 0xc0013aaee8?, 0x1418927?)
          /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:514
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.retryWithTimeout(0x40?, 0x399ac40?, 0xc000882a80)
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/retry.go:39
            | func retryWithTimeout(interval, timeout time.Duration, fn func() error) error {
            | 	var pollError error
            > 	err := wait.PollImmediate(interval, timeout, func() (bool, error) {
            | 		pollError = nil
            | 		err := fn()
      > sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectLogsFromNode.func1.1()
          /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:141
            | execToPathFn := func(outputFileName, command string, args ...string) func() error {
            | 	return func() error {
            > 		return retryWithTimeout(collectLogInterval, collectLogTimeout, func() error {
            | 			f, err := fileOnHost(filepath.Join(outputPath, outputFileName))
            | 			if err != nil {
        sigs.k8s.io/kind/pkg/errors.AggregateConcurrent.func1()
          /home/prow/go/pkg/mod/sigs.k8s.io/kind@v0.17.0/pkg/errors/concurrent.go:51
        sigs.k8s.io/kind/pkg/errors.AggregateConcurrent
          /home/prow/go/pkg/mod/sigs.k8s.io/kind@v0.17.0/pkg/errors/concurrent.go:49

There were additional failures detected after the initial failure. These are visible in the timeline
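The Spec Goroutine and Goroutines of Interest above show where the suite spent its remaining budget: log collection for the "md-rollout-tskhi1" workload cluster sat for 217 minutes inside the golang.org/x/crypto/ssh handshake started by getProxiedSSHClient, so the AfterEach dump never returned and the global 4h timeout fired. As a minimal sketch of one possible mitigation, assuming the proxied net.Conn is reachable at that call site, a deadline on the underlying connection would turn a stalled handshake into an error instead of an indefinite wait; the helper name newSSHClientWithDeadline and the timeout parameter below are illustrative, not part of the CAPZ helpers.

package e2e

import (
	"fmt"
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// newSSHClientWithDeadline is an illustrative sketch (not an existing CAPZ
// helper): it bounds the SSH handshake that the trace above shows hanging in
// ssh.NewClientConn by putting a deadline on the underlying net.Conn, then
// clears the deadline once the handshake succeeds so later commands are not
// cut short.
func newSSHClientWithDeadline(c net.Conn, hostname string, config *ssh.ClientConfig, handshakeTimeout time.Duration) (*ssh.Client, error) {
	if err := c.SetDeadline(time.Now().Add(handshakeTimeout)); err != nil {
		return nil, fmt.Errorf("setting handshake deadline: %w", err)
	}
	conn, chans, reqs, err := ssh.NewClientConn(c, hostname, config)
	if err != nil {
		return nil, fmt.Errorf("getting a new SSH client connection: %w", err)
	}
	// Remove the deadline so normal command execution is not cut short.
	if err := c.SetDeadline(time.Time{}); err != nil {
		return nil, fmt.Errorf("clearing handshake deadline: %w", err)
	}
	return ssh.NewClient(conn, chans, reqs), nil
}

With a bound like this, the retryWithTimeout loop visible in the trace would see an error within the handshake budget and could retry or give up inside collectLogTimeout, rather than pinning the whole suite.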

				



Error lines from build-log.txt

... skipping 808 lines ...
------------------------------
• [909.672 seconds]
Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:108

  Captured StdOut/StdErr Output >>
  2023/01/15 21:18:33 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-s530bc-md-0 created
  cluster.cluster.x-k8s.io/self-hosted-s530bc created
  machinedeployment.cluster.x-k8s.io/self-hosted-s530bc-md-0 created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/self-hosted-s530bc-control-plane created
  azurecluster.infrastructure.cluster.x-k8s.io/self-hosted-s530bc created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
... skipping 207 lines ...
  Jan 15 21:28:31.426: INFO: Fetching activity logs took 1.271791932s
  Jan 15 21:28:31.426: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
  Jan 15 21:28:31.925: INFO: Deleting all clusters in the self-hosted namespace
  STEP: Deleting cluster self-hosted-s530bc @ 01/15/23 21:28:31.957
  INFO: Waiting for the Cluster self-hosted/self-hosted-s530bc to be deleted
  STEP: Waiting for cluster self-hosted-s530bc to be deleted @ 01/15/23 21:28:31.972
  INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-c7d88d789-c2fv9, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-6bc947c55b-hpl6k, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8c96b57bb-6jc42, container manager: http2: client connection lost
  Jan 15 21:32:02.102: INFO: Deleting namespace used for hosting the "self-hosted" test spec
  INFO: Deleting namespace self-hosted
  Jan 15 21:32:02.135: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
  STEP: Redacting sensitive information from logs @ 01/15/23 21:32:02.727
  Jan 15 21:32:49.785: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
  STEP: Redacting sensitive information from logs @ 01/15/23 21:32:49.785
... skipping 27 lines ...
  configmap/cni-quick-start-8aobvz-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-quick-start-8aobvz created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine quick-start-8aobvz-md-win-6ffc7c6994-8s6qs, Cluster quick-start-o5t5kk/quick-start-8aobvz: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine quick-start-8aobvz-md-win-6ffc7c6994-t9ztz, Cluster quick-start-o5t5kk/quick-start-8aobvz: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sun, 15 Jan 2023 21:18:33 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "quick-start" test spec @ 01/15/23 21:18:33.553
  INFO: Creating namespace quick-start-o5t5kk
... skipping 478 lines ...
  configmap/cni-md-scale-ahnwtf-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-scale-ahnwtf created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-scale-ahnwtf-md-win-54b7dc498d-6j649, Cluster md-scale-dzpx9d/md-scale-ahnwtf: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine md-scale-ahnwtf-md-win-54b7dc498d-xqx84, Cluster md-scale-dzpx9d/md-scale-ahnwtf: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sun, 15 Jan 2023 21:18:33 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-scale" test spec @ 01/15/23 21:18:33.681
  INFO: Creating namespace md-scale-dzpx9d
... skipping 380 lines ...
  Jan 15 21:39:36.618: INFO: Creating log watcher for controller calico-system/csi-node-driver-zkmxl, container calico-csi
  Jan 15 21:39:36.618: INFO: Collecting events for Pod calico-system/calico-node-j626k
  Jan 15 21:39:36.618: INFO: Creating log watcher for controller calico-system/csi-node-driver-zkmxl, container csi-node-driver-registrar
  Jan 15 21:39:36.618: INFO: Creating log watcher for controller calico-system/calico-typha-865fc4994b-qnbgw, container calico-typha
  Jan 15 21:39:36.618: INFO: Collecting events for Pod calico-system/csi-node-driver-zkmxl
  Jan 15 21:39:36.616: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-2b5q2, container calico-kube-controllers
  Jan 15 21:39:36.705: INFO: Error starting logs stream for pod calico-system/csi-node-driver-zkmxl, container calico-csi: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.705: INFO: Error starting logs stream for pod calico-system/calico-node-5q57s, container calico-node: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.705: INFO: Error starting logs stream for pod calico-system/csi-node-driver-zkmxl, container csi-node-driver-registrar: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.711: INFO: Collecting events for Pod kube-system/coredns-57575c5f89-s42cx
  Jan 15 21:39:36.711: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kkrvc, container liveness-probe
  Jan 15 21:39:36.712: INFO: Collecting events for Pod kube-system/kube-apiserver-machine-pool-862hcm-control-plane-l66m6
  Jan 15 21:39:36.712: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-f6kck, container liveness-probe
  Jan 15 21:39:36.712: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-kkrvc
  Jan 15 21:39:36.712: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-g5fq2, container liveness-probe
... skipping 19 lines ...
  Jan 15 21:39:36.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-f6kck, container csi-attacher
  Jan 15 21:39:36.715: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-545d478dbf-f6kck
  Jan 15 21:39:36.715: INFO: Creating log watcher for controller kube-system/kube-proxy-fnkwc, container kube-proxy
  Jan 15 21:39:36.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-f6kck, container csi-snapshotter
  Jan 15 21:39:36.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-545d478dbf-f6kck, container csi-resizer
  Jan 15 21:39:36.715: INFO: Collecting events for Pod kube-system/kube-proxy-fnkwc
  Jan 15 21:39:36.812: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-g5fq2, container azuredisk: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.812: INFO: Error starting logs stream for pod kube-system/kube-proxy-fnkwc, container kube-proxy: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.812: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-g5fq2, container node-driver-registrar: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.812: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-g5fq2, container liveness-probe: pods "machine-pool-862hcm-mp-0000002" not found
  Jan 15 21:39:36.812: INFO: Fetching kube-system pod logs took 642.72103ms
  Jan 15 21:39:36.813: INFO: Dumping workload cluster machine-pool-2ppyws/machine-pool-862hcm Azure activity log
  Jan 15 21:39:36.813: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-xvg9b, container tigera-operator
  Jan 15 21:39:36.813: INFO: Collecting events for Pod tigera-operator/tigera-operator-65d6bf4d4f-xvg9b
  Jan 15 21:39:40.124: INFO: Fetching activity logs took 3.311914652s
  STEP: Dumping all the Cluster API resources in the "machine-pool-2ppyws" namespace @ 01/15/23 21:39:40.124
... skipping 25 lines ...
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-corrm1-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-corrm1-md-0 created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine node-drain-corrm1-control-plane-xrbbt, Cluster node-drain-famtnn/node-drain-corrm1: dialing public load balancer at node-drain-corrm1-d8149fd4.eastus.cloudapp.azure.com: dial tcp 20.246.243.17:22: connect: connection timed out
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sun, 15 Jan 2023 21:18:33 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "node-drain" test spec @ 01/15/23 21:18:33.701
  INFO: Creating namespace node-drain-famtnn
... skipping 174 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-16T01:07:50Z"}
++ early_exit_handler
++ '[' -n 156 ']'
++ kill -TERM 156
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 39 lines ...
  configmap/cni-md-rollout-tskhi1-calico-windows created
  configmap/csi-proxy-addon created
  configmap/containerd-logger-md-rollout-tskhi1 created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine md-rollout-tskhi1-md-win-595cb76f59-9xvms, Cluster md-rollout-ykpxiw/md-rollout-tskhi1: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
  Failed to get logs for Machine md-rollout-tskhi1-md-win-5db74bf4df-4ffrw, Cluster md-rollout-ykpxiw/md-rollout-tskhi1: azuremachines.infrastructure.cluster.x-k8s.io "md-rollout-tskhi1-md-win-ss9v8" not found
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Sun, 15 Jan 2023 21:18:33 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating a namespace for hosting the "md-rollout" test spec @ 01/15/23 21:18:33.728
  INFO: Creating namespace md-rollout-ykpxiw
... skipping 119 lines ...
  Jan 15 21:34:51.606: INFO: Attempting to copy file /c:/crashdumps.tar on node md-rollou-l2rx5 to /logs/artifacts/clusters/md-rollout-tskhi1/machines/md-rollout-tskhi1-md-win-595cb76f59-9xvms/crashdumps.tar
  Jan 15 21:34:53.182: INFO: Collecting boot logs for AzureMachine md-rollout-tskhi1-md-win-llczp6-l2rx5

  Jan 15 21:34:54.091: INFO: Collecting logs for Windows node md-rollou-ffnbp in cluster md-rollout-tskhi1 in namespace md-rollout-ykpxiw

  [TIMEDOUT] in [AfterEach] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103 @ 01/16/23 01:12:35.852
  Jan 16 01:12:35.996: INFO: FAILED!
  Jan 16 01:12:35.996: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
  STEP: Redacting sensitive information from logs @ 01/16/23 01:12:35.996
  [TIMEDOUT] in [AfterEach] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/16/23 01:13:05.997
  << Timeline

  [TIMEDOUT] A suite timeout occurred
... skipping 78 lines ...
              > conn, chans, reqs, err := ssh.NewClientConn(c, hostname, config)
              | if err != nil {
              | 	return nil, errors.Wrap(err, "getting a new SSH client connection")
        > sigs.k8s.io/cluster-api-provider-azure/test/e2e.execOnHost({0xc00347c980?, 0xc001715d40?}, {0xc00256e8d0?, 0xc001715db0?}, {0x3d4834c?, 0x203000?}, {0x41ef280, 0xc0008dc450}, {0x3e3effc, 0x9d}, ...)
            /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:493
              | func execOnHost(controlPlaneEndpoint, hostname, port string, f io.StringWriter, command string,
              | 	args ...string) error {
              > 	client, err := getProxiedSSHClient(controlPlaneEndpoint, hostname, port)
              | 	if err != nil {
              | 		return err
        > sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectLogsFromNode.func1.1.1()
            /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:147
              | 		}
              | 		defer f.Close()
              > 		return execOnHost(controlPlaneEndpoint, hostname, sshPort, f, command, args...)
              | 	})
              | }
        > sigs.k8s.io/cluster-api-provider-azure/test/e2e.retryWithTimeout.func1()
            /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/retry.go:41
              | err := wait.PollImmediate(interval, timeout, func() (bool, error) {
              | 	pollError = nil
              > 	err := fn()
              | 	if err != nil {
              | 		pollError = err
          k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x15312f1, 0x0})
            /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:222
... skipping 6 lines ...
          k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x42112a0, 0xc000138000}, 0x0?, 0xc0013aaea8?, 0x1418927?)
            /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:528
          k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x145f172?, 0xc0013aaee8?, 0x1418927?)
            /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.4/pkg/util/wait/wait.go:514
        > sigs.k8s.io/cluster-api-provider-azure/test/e2e.retryWithTimeout(0x40?, 0x399ac40?, 0xc000882a80)
            /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/retry.go:39
              | func retryWithTimeout(interval, timeout time.Duration, fn func() error) error {
              | 	var pollError error
              > 	err := wait.PollImmediate(interval, timeout, func() (bool, error) {
              | 		pollError = nil
              | 		err := fn()
        > sigs.k8s.io/cluster-api-provider-azure/test/e2e.collectLogsFromNode.func1.1()
            /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_logcollector.go:141
              | execToPathFn := func(outputFileName, command string, args ...string) func() error {
              | 	return func() error {
              > 		return retryWithTimeout(collectLogInterval, collectLogTimeout, func() error {
              | 			f, err := fileOnHost(filepath.Join(outputPath, outputFileName))
              | 			if err != nil {
          sigs.k8s.io/kind/pkg/errors.AggregateConcurrent.func1()
            /home/prow/go/pkg/mod/sigs.k8s.io/kind@v0.17.0/pkg/errors/concurrent.go:51
          sigs.k8s.io/kind/pkg/errors.AggregateConcurrent
            /home/prow/go/pkg/mod/sigs.k8s.io/kind@v0.17.0/pkg/errors/concurrent.go:49
... skipping 7 lines ...
[SynchronizedAfterSuite] PASSED [12926.057 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116

  Timeline >>
  STEP: Tearing down the management cluster @ 01/16/23 01:13:06.169
  INFO: Deleting the kind cluster "capz-e2e" failed. You may need to remove this by hand.
  << Timeline
------------------------------
[ReportAfterSuite] PASSED [0.034 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [TIMEDOUT] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [AfterEach] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/md_rollout.go:103

Ran 8 of 26 Specs in 14258.671 seconds
FAIL! - Suite Timeout Elapsed -- 7 Passed | 1 Failed | 0 Pending | 18 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0
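The deprecation notice above points at helpers.go and common.go call sites that still use the Ginkgo v1 API. A minimal sketch of the suggested replacement, assuming nothing about the surrounding CAPZ helpers (the function name specNameForLogs below is hypothetical): CurrentSpecReport() returns a SpecReport for the running spec, whose FullText() and Failed() cover the common uses of the old CurrentGinkgoTestDescription().

package e2e

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
)

// specNameForLogs is a hypothetical helper showing the migration the warning
// asks for: Ginkgo v2's CurrentSpecReport() replaces the deprecated
// CurrentGinkgoTestDescription().
func specNameForLogs() string {
	report := CurrentSpecReport() // zero value outside a running spec
	if report.Failed() {
		fmt.Fprintf(GinkgoWriter, "spec %q failed after %s\n", report.FullText(), report.RunTime)
	}
	return report.FullText()
}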

--- FAIL: TestE2E (14258.72s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 70 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0

--- FAIL: TestE2E (14258.45s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:423
... skipping 6 lines ...

PASS


Ginkgo ran 1 suite in 4h2m19.665521662s

Test Suite Failed
make[1]: *** [Makefile:655: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:664: test-e2e] Error 2
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:251","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process gracefully exited before 15m0s grace period","severity":"error","time":"2023-01-16T01:14:55Z"}