PR | CecileRobertMichon: Switch flavor and test templates to external cloud-provider
Result | ABORTED |
Tests | 1 failed / 20 succeeded |
Started |
Elapsed | 24m25s |
Revision | aab820e59d56af8ade1d9153e45a7205c3059993 |
Refs | 3105
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
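The --ginkgo.focus value above is a regular expression (whitespace and punctuation escaped) that selects only the failing spec. As a quick illustration — a hypothetical check, not part of the job itself — it matches the spec's full name as it appears in the test list further down:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus expression copied verbatim from the --ginkgo.focus argument above.
	focus := regexp.MustCompile(`capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$`)

	// Full spec name in the form used by the test list below.
	spec := "capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade " +
		"upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 " +
		"Should create a management cluster and then upgrade all the providers"

	fmt.Println(focus.MatchString(spec)) // prints "true": only this spec is focused
}
```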
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00072c780>: {
        Op: "Get",
        URL: "https://127.0.0.1:34099/api?timeout=32s",
        Err: <*net.OpError | 0xc002719180>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00072c750>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1],
                Port: 34099,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc001e29080>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
    Get "https://127.0.0.1:34099/api?timeout=32s": dial tcp 127.0.0.1:34099: connect: connection refused
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_proxy.go:193 @ 01/27/23 21:41:37.64
There were additional failures detected after the initial failure. These are visible in the timeline.

from junit.e2e_suite.1.xml
cluster.cluster.x-k8s.io/clusterctl-upgrade-8x0ife created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-8x0ife created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-0 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-md-win created
machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-8x0ife-calico-windows created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-8x0ife created
configmap/cni-clusterctl-upgrade-8x0ife-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-8x0ife created
felixconfiguration.crd.projectcalico.org/default created
> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 21:31:35.023
INFO: "" started at Fri, 27 Jan 2023 21:31:35 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 21:31:35.074 (51ms)
> Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 21:31:35.074
< Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 21:31:35.074 (0s)
> Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 21:31:35.074
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:31:35.074
INFO: Creating namespace clusterctl-upgrade-me8jl3
INFO: Creating event watcher for namespace "clusterctl-upgrade-me8jl3"
< Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 21:31:35.103 (29ms)
> Enter [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 21:31:35.103
STEP: Creating a workload cluster to be used as a new management cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:210 @ 01/27/23 21:31:35.103
INFO: Creating the workload cluster with name "clusterctl-upgrade-8x0ife" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-8x0ife --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Calling PreWaitForCluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/27/23 21:31:38.537
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 01/27/23 21:33:58.655
Jan 27 21:36:14.284: INFO: getting history for release cloud-provider-azure-oot
Jan 27 21:36:14.394: INFO: Release cloud-provider-azure-oot does not exist, installing it
Jan 27 21:36:17.484: INFO: creating 1 resource(s)
Jan 27 21:36:17.743: INFO: creating 10 resource(s)
Jan 27 21:36:18.549: INFO: Install complete
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/27/23 21:36:18.549
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/27/23 21:36:18.549
Jan 27 21:36:18.676: INFO: getting history for release projectcalico
Jan 27 21:36:18.786: INFO: Release projectcalico does not exist, installing it
Jan 27 21:36:19.643: INFO: creating 1 resource(s)
Jan 27 21:36:19.786: INFO: creating 1 resource(s)
Jan 27 21:36:19.907: INFO: creating 1 resource(s)
Jan 27 21:36:20.027: INFO: creating 1 resource(s)
Jan 27 21:36:20.152: INFO: creating 1 resource(s)
Jan 27 21:36:20.271: INFO: creating 1 resource(s)
Jan 27 21:36:20.519: INFO: creating 1 resource(s)
Jan 27 21:36:20.653: INFO: creating 1 resource(s)
Jan 27 21:36:20.770: INFO: creating 1 resource(s)
Jan 27 21:36:20.886: INFO: creating 1 resource(s)
Jan 27 21:36:21.010: INFO: creating 1 resource(s)
Jan 27 21:36:21.127: INFO: creating 1 resource(s)
Jan 27 21:36:21.246: INFO: creating 1 resource(s)
Jan 27 21:36:21.363: INFO: creating 1 resource(s)
Jan 27 21:36:21.480: INFO: creating 1 resource(s)
Jan 27 21:36:21.618: INFO: creating 1 resource(s)
Jan 27 21:36:21.750: INFO: creating 1 resource(s)
Jan 27 21:36:21.878: INFO: creating 1 resource(s)
Jan 27 21:36:22.022: INFO: creating 1 resource(s)
Jan 27 21:36:22.210: INFO: creating 1 resource(s)
Jan 27 21:36:22.786: INFO: creating 1 resource(s)
Jan 27 21:36:22.957: INFO: Clearing discovery cache
Jan 27 21:36:22.957: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 27 21:36:28.045: INFO: creating 1 resource(s)
Jan 27 21:36:28.769: INFO: creating 6 resource(s)
Jan 27 21:36:30.046: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/27/23 21:36:30.813
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 21:36:31.255
Jan 27 21:36:31.255: INFO: starting to wait for deployment to become available
Jan 27 21:36:41.473: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.217759745s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/27/23 21:36:42.705
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 21:37:13.581
Jan 27 21:37:13.581: INFO: starting to wait for deployment to become available
Jan 27 21:38:14.734: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m1.152308709s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 21:38:15.726
Jan 27 21:38:15.726: INFO: starting to wait for deployment to become available
Jan 27 21:38:15.837: INFO: Deployment calico-system/calico-typha is now available, took 111.930786ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/27/23 21:38:15.837
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 21:38:16.614
Jan 27 21:38:16.614: INFO: starting to wait for deployment to become available
Jan 27 21:38:36.977: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.363145586s
STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 01/27/23 21:38:36.977
STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 21:38:37.527
Jan 27 21:38:37.527: INFO: starting to wait for deployment to become available
Jan 27 21:38:37.637: INFO: Deployment kube-system/cloud-controller-manager is now available, took 109.753502ms
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00072c780>: {
        Op: "Get",
        URL: "https://127.0.0.1:34099/api?timeout=32s",
        Err: <*net.OpError | 0xc002719180>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00072c750>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1],
                Port: 34099,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc001e29080>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
    Get "https://127.0.0.1:34099/api?timeout=32s": dial tcp 127.0.0.1:34099: connect: connection refused
occurred
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_proxy.go:193 @ 01/27/23 21:41:37.64
< Exit [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 21:41:37.64 (10m2.537s)
> Enter [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 21:41:37.64
STEP: Dumping logs from the "clusterctl-upgrade-8x0ife" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 21:41:37.64
Jan 27 21:41:37.640: INFO: Dumping workload cluster clusterctl-upgrade-me8jl3/clusterctl-upgrade-8x0ife logs
[FAILED] Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc0029e8c30>: {
        Op: "Get",
        URL: "https://127.0.0.1:34099/api?timeout=32s",
        Err: <*net.OpError | 0xc000164f50>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc002695140>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1],
                Port: 34099,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0023482c0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
    Get "https://127.0.0.1:34099/api?timeout=32s": dial tcp 127.0.0.1:34099: connect: connection refused
occurred
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_proxy.go:193 @ 01/27/23 21:44:37.645
< Exit [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 21:44:37.645 (3m0.006s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 21:44:37.645
Jan 27 21:44:37.645: INFO: FAILED!
Jan 27 21:44:37.645: INFO: Cleaning up after "Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/27/23 21:44:37.645
INFO: "Should create a management cluster and then upgrade all the providers" started at Fri, 27 Jan 2023 21:44:41 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 21:44:41.434 (3.789s)
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the intree cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster with VMSS flex machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node