PR | CecileRobertMichon: Switch flavor and test templates to external cloud-provider
Result | FAILURE
Tests | 1 failed / 20 succeeded
Started |
Elapsed | 22m2s
Revision | fc6484ce9b345584116190b85d25d5b82a33222e
Refs | 3105
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
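The --ginkgo.focus value above is simply the failing spec's full name with every space replaced by \s and all punctuation backslash-escaped, so Ginkgo's regexp matches it literally. A minimal Go sketch (not part of the repo; the spec string is reconstructed from the report below) confirming the escaped pattern matches the spec name:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied verbatim from the repro command; `\s` stands in
	// for spaces, and punctuation such as '-' and ',' is escaped.
	focus := regexp.MustCompile(`capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$`)

	// Full spec name as Ginkgo reports it (suite name plus container texts).
	spec := "capz-e2e [It] Running the Cluster API E2E tests " +
		"API Version Upgrade " +
		"upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 " +
		"Should create a management cluster and then upgrade all the providers"

	fmt.Println(focus.MatchString(spec)) // true
}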
[FAILED] Timed out after 195.010s.
Expected success, but got an error:
    <*errors.withStack | 0xc0001fc9c0>: {
        error: <*errors.withMessage | 0xc000204d60>{
            cause: <*url.Error | 0xc000667530>{
                Op: "Get",
                URL: "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc001b5dc20>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000d0b410>{
                        IP: [20, 220, 44, 50],
                        Port: 6443,
                        Zone: "",
                    },
                    Err: <*net.timeoutError | 0x5d04820>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3547885, 0x35ea4fb, 0x3642752, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x3641fe9, 0x3631f54, 0x36354e5, 0x2ff0810, 0x3417e48, 0x194637b, 0x195a958, 0x14da741],
    }
    Kubernetes cluster unreachable: Get "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version": dial tcp 20.220.44.50:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 01/27/23 21:55:03.49
from junit.e2e_suite.1.xml
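For triage, the failing call can be reproduced outside the suite. A minimal sketch (the endpoint is copied from the error above; the probe itself is not part of the test code) that issues the same GET against the API server's public address — a dial timeout here, matching the failure, points at the load balancer/NSG path to port 6443 rather than at anything inside the cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Bound the probe so it reports instead of hanging when no TCP
		// connection can be established.
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// The kubeadm-generated serving cert is not in the local trust
			// store; skip verification since only reachability is tested.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Endpoint copied verbatim from the failure message above.
	url := "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("unreachable:", err) // e.g. "dial tcp 20.220.44.50:6443: i/o timeout"
		return
	}
	defer resp.Body.Close()
	fmt.Println("reachable, status:", resp.Status)
}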
cluster.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-0 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-md-win created
machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u-calico-windows created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-0yoz9u created
configmap/cni-clusterctl-upgrade-0yoz9u-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-0yoz9u created
Failed to get logs for Machine clusterctl-upgrade-0yoz9u-md-0-5d598d9959-ttp2d, Cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u: [dialing from control plane to target node at clusterctl-upgrade-0yoz9u-md-0-cf5gf: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 21:49:45.088
INFO: "" started at Fri, 27 Jan 2023 21:49:45 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 21:49:45.124 (36ms)
> Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 21:49:45.124
< Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 21:49:45.124 (0s)
> Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 21:49:45.124
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:49:45.125
INFO: Creating namespace clusterctl-upgrade-yslhyk
INFO: Creating event watcher for namespace "clusterctl-upgrade-yslhyk"
< Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 21:49:45.151 (26ms)
> Enter [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 21:49:45.151
STEP: Creating a workload cluster to be used as a new management cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:210 @ 01/27/23 21:49:45.151
INFO: Creating the workload cluster with name "clusterctl-upgrade-0yoz9u" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-0yoz9u --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Calling PreWaitForCluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/27/23 21:49:48.365
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 01/27/23 21:51:48.459
[FAILED] Timed out after 195.010s.
Expected success, but got an error:
    <*errors.withStack | 0xc0001fc9c0>: {
        error: <*errors.withMessage | 0xc000204d60>{
            cause: <*url.Error | 0xc000667530>{
                Op: "Get",
                URL: "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc001b5dc20>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000d0b410>{
                        IP: [20, 220, 44, 50],
                        Port: 6443,
                        Zone: "",
                    },
                    Err: <*net.timeoutError | 0x5d04820>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3547885, 0x35ea4fb, 0x3642752, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x3641fe9, 0x3631f54, 0x36354e5, 0x2ff0810, 0x3417e48, 0x194637b, 0x195a958, 0x14da741],
    }
    Kubernetes cluster unreachable: Get "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version": dial tcp 20.220.44.50:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 01/27/23 21:55:03.49
< Exit [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 21:55:03.49 (5m18.339s)
> Enter [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 21:55:03.49
STEP: Dumping logs from the "clusterctl-upgrade-0yoz9u" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:55:03.49
Jan 27 21:55:03.490: INFO: Dumping workload cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u logs
Jan 27 21:55:03.532: INFO: Collecting logs for Linux node clusterctl-upgrade-0yoz9u-control-plane-mdf9x in cluster clusterctl-upgrade-0yoz9u in namespace clusterctl-upgrade-yslhyk
Jan 27 21:55:14.145: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-0yoz9u-control-plane-mdf9x
Jan 27 21:55:15.099: INFO: Collecting logs for Linux node clusterctl-upgrade-0yoz9u-md-0-cf5gf in cluster clusterctl-upgrade-0yoz9u in namespace clusterctl-upgrade-yslhyk
Jan 27 21:56:18.273: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-0yoz9u-md-0-cf5gf
Jan 27 21:56:18.288: INFO: Dumping workload cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u kube-system pod logs
Jan 27 21:56:18.654: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-dbt6b, container coredns
Jan 27 21:56:18.654: INFO: Describing Pod kube-system/coredns-78fcd69978-dbt6b
Jan 27 21:56:18.723: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-ngk8t, container coredns
Jan 27 21:56:18.724: INFO: Describing Pod kube-system/coredns-78fcd69978-ngk8t
Jan 27 21:56:18.796: INFO: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-0yoz9u-control-plane-mdf9x, container etcd
Jan 27 21:56:18.796: INFO: Describing Pod kube-system/etcd-clusterctl-upgrade-0yoz9u-control-plane-mdf9x
Jan 27 21:56:18.867: INFO: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-0yoz9u-control-plane-mdf9x, container kube-apiserver
Jan 27 21:56:18.869: INFO: Describing Pod kube-system/kube-apiserver-clusterctl-upgrade-0yoz9u-control-plane-mdf9x
Jan 27 21:56:18.940: INFO: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-0yoz9u-control-plane-mdf9x, container kube-controller-manager
Jan 27 21:56:18.940: INFO: Describing Pod kube-system/kube-controller-manager-clusterctl-upgrade-0yoz9u-control-plane-mdf9x
Jan 27 21:56:19.046: INFO: Describing Pod kube-system/kube-proxy-c24rs
Jan 27 21:56:19.046: INFO: Creating log watcher for controller kube-system/kube-proxy-c24rs, container kube-proxy
Jan 27 21:56:19.415: INFO: Fetching kube-system pod logs took 1.127648401s
Jan 27 21:56:19.415: INFO: Dumping workload cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u Azure activity log
Jan 27 21:56:19.415: INFO: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-0yoz9u-control-plane-mdf9x, container kube-scheduler
Jan 27 21:56:19.416: INFO: Describing Pod kube-system/kube-scheduler-clusterctl-upgrade-0yoz9u-control-plane-mdf9x
Jan 27 21:56:21.324: INFO: Fetching activity logs took 1.908139571s
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-yslhyk" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:56:21.324
STEP: Deleting cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:56:21.714
STEP: Deleting cluster clusterctl-upgrade-0yoz9u - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/27/23 21:56:21.733
INFO: Waiting for the Cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u to be deleted
STEP: Waiting for cluster clusterctl-upgrade-0yoz9u to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/27/23 21:56:21.749
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 22:00:11.921
INFO: Deleting namespace clusterctl-upgrade-yslhyk
< Exit [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 22:00:11.954 (5m8.463s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 22:00:11.954
Jan 27 22:00:11.954: INFO: FAILED!
Jan 27 22:00:11.954: INFO: Cleaning up after "Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/27/23 22:00:11.954
INFO: "Should create a management cluster and then upgrade all the providers" started at Fri, 27 Jan 2023 22:00:19 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 22:00:19.627 (7.673s)
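The "Timed out after 195.010s" wording is Gomega's: the helper at test/e2e/helpers.go:949 evidently retries a call until a deadline and reports the last error once the window closes, and the "Kubernetes cluster unreachable" wrapper around the /version GET matches helm's readiness check. A hypothetical sketch of that retry shape only — the helper name, timeout, and poll interval below are assumptions, not the suite's actual values:

package e2e

import (
	"time"

	. "github.com/onsi/gomega"
	"k8s.io/client-go/kubernetes"
)

// waitForAPIServer is a hypothetical illustration of the retry pattern that
// produces "Timed out after ...": Gomega's Eventually polls the wrapped call
// until the timeout elapses, then fails with the last error it observed.
func waitForAPIServer(clientset kubernetes.Interface) {
	Eventually(func() error {
		// GET /version -- the same endpoint the failed dial targeted.
		_, err := clientset.Discovery().ServerVersion()
		return err
	}, 3*time.Minute, 10*time.Second).Should(Succeed(),
		"workload cluster API server never became reachable")
}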
capz-e2e [SynchronizedAfterSuite] (10 identical entries, one per Ginkgo node)
capz-e2e [SynchronizedBeforeSuite] (10 identical entries, one per Ginkgo node)
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the intree cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster with VMSS flex machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 656 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [634.539 seconds]
Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209
Captured StdOut/StdErr Output >>
cluster.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-0yoz9u created
... skipping 11 lines ...
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-0yoz9u created
configmap/cni-clusterctl-upgrade-0yoz9u-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-0yoz9u created
Failed to get logs for Machine clusterctl-upgrade-0yoz9u-md-0-5d598d9959-ttp2d, Cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u: [dialing from control plane to target node at clusterctl-upgrade-0yoz9u-md-0-cf5gf: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Fri, 27 Jan 2023 21:49:45 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec @ 01/27/23 21:49:45.125
INFO: Creating namespace clusterctl-upgrade-yslhyk
... skipping 5 lines ...
INFO: Applying the cluster template yaml to the cluster
INFO: Calling PreWaitForCluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase @ 01/27/23 21:49:48.365
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm @ 01/27/23 21:51:48.459
[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 01/27/23 21:55:03.49
STEP: Dumping logs from the "clusterctl-upgrade-0yoz9u" workload cluster @ 01/27/23 21:55:03.49
Jan 27 21:55:03.490: INFO: Dumping workload cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u logs
Jan 27 21:55:03.532: INFO: Collecting logs for Linux node clusterctl-upgrade-0yoz9u-control-plane-mdf9x in cluster clusterctl-upgrade-0yoz9u in namespace clusterctl-upgrade-yslhyk
Jan 27 21:55:14.145: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-0yoz9u-control-plane-mdf9x
... skipping 23 lines ...
STEP: Deleting cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u @ 01/27/23 21:56:21.714
STEP: Deleting cluster clusterctl-upgrade-0yoz9u @ 01/27/23 21:56:21.733
INFO: Waiting for the Cluster clusterctl-upgrade-yslhyk/clusterctl-upgrade-0yoz9u to be deleted
STEP: Waiting for cluster clusterctl-upgrade-0yoz9u to be deleted @ 01/27/23 21:56:21.749
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec @ 01/27/23 22:00:11.921
INFO: Deleting namespace clusterctl-upgrade-yslhyk
Jan 27 22:00:11.954: INFO: FAILED!
Jan 27 22:00:11.954: INFO: Cleaning up after "Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers" spec
STEP: Redacting sensitive information from logs @ 01/27/23 22:00:11.954
INFO: "Should create a management cluster and then upgrade all the providers" started at Fri, 27 Jan 2023 22:00:19 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 195.010s.
Expected success, but got an error:
    <*errors.withStack | 0xc0001fc9c0>: {
        error: <*errors.withMessage | 0xc000204d60>{
            cause: <*url.Error | 0xc000667530>{
                Op: "Get",
                URL: "https://clusterctl-upgrade-0yoz9u-676d472d.canadacentral.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc001b5dc20>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
... skipping 35 lines ...
[ReportAfterSuite] PASSED [0.019 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
Summarizing 1 Failure:
[FAIL] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949
Ran 1 of 24 Specs in 800.077 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 23 Skipped
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:282
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:427
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.0
--- FAIL: TestE2E (800.07s)
FAIL
Ginkgo ran 1 suite in 16m48.114026069s
Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...