PR | CecileRobertMichon: Switch flavor and test templates to external cloud-provider
Result | FAILURE
Tests | 1 failed / 20 succeeded
Started |
Elapsed | 45m50s
Revision | 65957cf999e471834699d80bd2d98beb33399003
Refs | 3105
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
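The invocation above is the upstream Prow wrapper. To reproduce just this spec from a cluster-api-provider-azure checkout, a minimal sketch, assuming the repository's GINKGO_FOCUS hook for the test-e2e Make target and that the usual Azure credential variables (AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET) are exported:

  # Focus string is an assumption; widen or narrow it to match the spec name above
  GINKGO_FOCUS="API Version Upgrade" make test-e2e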
[FAILED] Timed out after 1200.000s. No Control Plane machines came into existence.
Expected <bool>: false to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:154 @ 01/27/23 01:07:27.585
(from junit.e2e_suite.1.xml)
cluster.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4 created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-0 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-md-win created
machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4-calico-windows created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-3mtzz4 created
configmap/cni-clusterctl-upgrade-3mtzz4-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-3mtzz4 created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine clusterctl-upgrade-3mtzz4-md-0-54768d5f75-mkjkz, Cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4: [dialing from control plane to target node at clusterctl-upgrade-3mtzz4-md-0-n6q8r: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/clusterctl-upgrade-3mtzz4-md-0-n6q8r' under resource group 'clusterctl-upgrade-3mtzz4' was not found.
For more details please go to https://aka.ms/ARMResourceNotFoundFix"] > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 00:41:51.538 INFO: "" started at Fri, 27 Jan 2023 00:41:51 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/27/23 00:41:51.579 (41ms) > Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 00:41:51.579 < Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:221 @ 01/27/23 00:41:51.579 (0s) > Enter [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 00:41:51.579 STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 00:41:51.579 INFO: Creating namespace clusterctl-upgrade-jiigwq INFO: Creating event watcher for namespace "clusterctl-upgrade-jiigwq" < Exit [BeforeEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:167 @ 01/27/23 00:41:51.599 (21ms) > Enter [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 00:41:51.599 STEP: Creating a workload cluster to be used as a new management cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:210 @ 01/27/23 00:41:51.599 INFO: Creating the workload cluster with name "clusterctl-upgrade-3mtzz4" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster clusterctl-upgrade-3mtzz4 --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster INFO: Calling PreWaitForCluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/27/23 00:41:54.508 INFO: Waiting for control plane to be initialized STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 01/27/23 00:43:34.589 Jan 27 00:45:38.891: INFO: getting history for release cloud-provider-azure-oot Jan 27 00:45:38.949: INFO: Release cloud-provider-azure-oot does not exist, installing it Jan 27 00:45:41.480: INFO: creating 1 resource(s) Jan 27 00:45:42.052: INFO: creating 10 resource(s) Jan 27 00:45:42.493: INFO: Install complete STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/27/23 00:45:42.493 STEP: Configuring calico CNI helm chart for 
IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/27/23 00:45:42.493 Jan 27 00:45:42.568: INFO: getting history for release projectcalico Jan 27 00:45:42.627: INFO: Release projectcalico does not exist, installing it Jan 27 00:45:43.335: INFO: creating 1 resource(s) Jan 27 00:45:43.411: INFO: creating 1 resource(s) Jan 27 00:45:43.482: INFO: creating 1 resource(s) Jan 27 00:45:43.548: INFO: creating 1 resource(s) Jan 27 00:45:43.614: INFO: creating 1 resource(s) Jan 27 00:45:43.681: INFO: creating 1 resource(s) Jan 27 00:45:43.817: INFO: creating 1 resource(s) Jan 27 00:45:43.897: INFO: creating 1 resource(s) Jan 27 00:45:43.962: INFO: creating 1 resource(s) Jan 27 00:45:44.029: INFO: creating 1 resource(s) Jan 27 00:45:44.096: INFO: creating 1 resource(s) Jan 27 00:45:44.160: INFO: creating 1 resource(s) Jan 27 00:45:44.226: INFO: creating 1 resource(s) Jan 27 00:45:44.309: INFO: creating 1 resource(s) Jan 27 00:45:44.384: INFO: creating 1 resource(s) Jan 27 00:45:44.463: INFO: creating 1 resource(s) Jan 27 00:45:44.539: INFO: creating 1 resource(s) Jan 27 00:45:44.612: INFO: creating 1 resource(s) Jan 27 00:45:44.693: INFO: creating 1 resource(s) Jan 27 00:45:44.814: INFO: creating 1 resource(s) Jan 27 00:45:45.158: INFO: creating 1 resource(s) Jan 27 00:45:45.220: INFO: Clearing discovery cache Jan 27 00:45:45.220: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 27 00:45:48.920: INFO: creating 1 resource(s) Jan 27 00:45:49.439: INFO: creating 6 resource(s) Jan 27 00:45:50.171: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/27/23 00:45:50.606 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 00:45:50.848 Jan 27 00:45:50.848: INFO: starting to wait for deployment to become available Jan 27 00:46:00.968: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.120033033s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/27/23 00:46:01.709 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 00:46:02.005 Jan 27 00:46:02.005: INFO: starting to wait for deployment to become available Jan 27 00:47:05.425: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m3.419928029s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 00:47:05.968 Jan 27 00:47:05.968: INFO: starting to wait for deployment to become available Jan 27 00:47:06.310: INFO: Deployment calico-system/calico-typha is now available, took 341.572917ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/27/23 00:47:06.31 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 00:47:06.794 Jan 27 00:47:06.794: INFO: starting to wait for deployment to become available Jan 27 00:47:26.969: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.175116194s STEP: 
Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 01/27/23 00:47:26.99 STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/27/23 00:47:27.508 Jan 27 00:47:27.508: INFO: starting to wait for deployment to become available Jan 27 00:47:27.567: INFO: Deployment kube-system/cloud-controller-manager is now available, took 59.36365ms INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/27/23 00:47:27.585 [FAILED] Timed out after 1200.000s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:154 @ 01/27/23 01:07:27.585 < Exit [It] Should create a management cluster and then upgrade all the providers - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209 @ 01/27/23 01:07:27.585 (25m35.986s) > Enter [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 01:07:27.585 STEP: Dumping logs from the "clusterctl-upgrade-3mtzz4" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 01:07:27.585 Jan 27 01:07:27.585: INFO: Dumping workload cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 logs Jan 27 01:07:27.629: INFO: Collecting logs for Linux node clusterctl-upgrade-3mtzz4-control-plane-bbpl9 in cluster clusterctl-upgrade-3mtzz4 in namespace clusterctl-upgrade-jiigwq Jan 27 01:07:41.174: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-3mtzz4-control-plane-bbpl9 Jan 27 01:07:42.427: INFO: Collecting logs for Linux node clusterctl-upgrade-3mtzz4-md-0-n6q8r in cluster clusterctl-upgrade-3mtzz4 in namespace clusterctl-upgrade-jiigwq Jan 27 01:08:46.688: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-3mtzz4-md-0-n6q8r Jan 27 01:08:47.064: INFO: Dumping workload cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 kube-system pod logs Jan 27 01:08:47.780: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-6c99cd59c7-48h2j, container calico-apiserver Jan 27 01:08:47.781: INFO: Describing Pod calico-apiserver/calico-apiserver-6c99cd59c7-48h2j Jan 27 01:08:47.896: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-6c99cd59c7-dc6w2, container calico-apiserver Jan 27 01:08:47.896: INFO: Describing Pod calico-apiserver/calico-apiserver-6c99cd59c7-dc6w2 Jan 27 01:08:48.014: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-8fdfc695-cqccp, container calico-kube-controllers Jan 27 01:08:48.014: INFO: Describing Pod calico-system/calico-kube-controllers-8fdfc695-cqccp Jan 27 01:08:48.149: INFO: Creating log watcher for controller calico-system/calico-node-5krs6, container calico-node Jan 27 01:08:48.150: INFO: Describing Pod calico-system/calico-node-5krs6 Jan 27 01:08:48.211: INFO: Error starting logs stream for pod calico-system/calico-node-5krs6, container calico-node: pods 
"clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found Jan 27 01:08:48.275: INFO: Creating log watcher for controller calico-system/calico-node-sbv5r, container calico-node Jan 27 01:08:48.275: INFO: Describing Pod calico-system/calico-node-sbv5r Jan 27 01:08:48.409: INFO: Creating log watcher for controller calico-system/calico-typha-777988cb5b-stdx2, container calico-typha Jan 27 01:08:48.409: INFO: Describing Pod calico-system/calico-typha-777988cb5b-stdx2 Jan 27 01:08:48.525: INFO: Describing Pod calico-system/csi-node-driver-gtl52 Jan 27 01:08:48.525: INFO: Creating log watcher for controller calico-system/csi-node-driver-gtl52, container calico-csi Jan 27 01:08:48.526: INFO: Creating log watcher for controller calico-system/csi-node-driver-gtl52, container csi-node-driver-registrar Jan 27 01:08:48.921: INFO: Describing Pod calico-system/csi-node-driver-srlfc Jan 27 01:08:48.921: INFO: Creating log watcher for controller calico-system/csi-node-driver-srlfc, container calico-csi Jan 27 01:08:48.921: INFO: Creating log watcher for controller calico-system/csi-node-driver-srlfc, container csi-node-driver-registrar Jan 27 01:08:48.980: INFO: Error starting logs stream for pod calico-system/csi-node-driver-srlfc, container csi-node-driver-registrar: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found Jan 27 01:08:48.980: INFO: Error starting logs stream for pod calico-system/csi-node-driver-srlfc, container calico-csi: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found Jan 27 01:08:49.319: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-6dcd4c6dd6-fhtvj, container cloud-controller-manager Jan 27 01:08:49.319: INFO: Describing Pod kube-system/cloud-controller-manager-6dcd4c6dd6-fhtvj Jan 27 01:08:49.719: INFO: Creating log watcher for controller kube-system/cloud-node-manager-4tpkh, container cloud-node-manager Jan 27 01:08:49.719: INFO: Describing Pod kube-system/cloud-node-manager-4tpkh Jan 27 01:08:50.119: INFO: Creating log watcher for controller kube-system/cloud-node-manager-md6np, container cloud-node-manager Jan 27 01:08:50.119: INFO: Describing Pod kube-system/cloud-node-manager-md6np Jan 27 01:08:50.177: INFO: Error starting logs stream for pod kube-system/cloud-node-manager-md6np, container cloud-node-manager: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found Jan 27 01:08:50.520: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-5cnd6, container coredns Jan 27 01:08:50.520: INFO: Describing Pod kube-system/coredns-78fcd69978-5cnd6 Jan 27 01:08:50.919: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-fh95q, container coredns Jan 27 01:08:50.919: INFO: Describing Pod kube-system/coredns-78fcd69978-fh95q Jan 27 01:08:51.319: INFO: Describing Pod kube-system/etcd-clusterctl-upgrade-3mtzz4-control-plane-bbpl9 Jan 27 01:08:51.319: INFO: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container etcd Jan 27 01:08:51.719: INFO: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-apiserver Jan 27 01:08:51.719: INFO: Describing Pod kube-system/kube-apiserver-clusterctl-upgrade-3mtzz4-control-plane-bbpl9 Jan 27 01:08:52.117: INFO: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-controller-manager Jan 27 01:08:52.117: INFO: Describing Pod 
kube-system/kube-controller-manager-clusterctl-upgrade-3mtzz4-control-plane-bbpl9 Jan 27 01:08:52.519: INFO: Describing Pod kube-system/kube-proxy-jf7kr Jan 27 01:08:52.519: INFO: Creating log watcher for controller kube-system/kube-proxy-jf7kr, container kube-proxy Jan 27 01:08:52.581: INFO: Error starting logs stream for pod kube-system/kube-proxy-jf7kr, container kube-proxy: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found Jan 27 01:08:52.923: INFO: Creating log watcher for controller kube-system/kube-proxy-xv6qn, container kube-proxy Jan 27 01:08:52.923: INFO: Describing Pod kube-system/kube-proxy-xv6qn Jan 27 01:08:53.318: INFO: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-scheduler Jan 27 01:08:53.318: INFO: Describing Pod kube-system/kube-scheduler-clusterctl-upgrade-3mtzz4-control-plane-bbpl9 Jan 27 01:08:53.718: INFO: Fetching kube-system pod logs took 6.654387142s Jan 27 01:08:53.718: INFO: Dumping workload cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 Azure activity log Jan 27 01:08:53.718: INFO: Creating log watcher for controller tigera-operator/tigera-operator-cffd8458f-csltt, container tigera-operator Jan 27 01:08:53.718: INFO: Describing Pod tigera-operator/tigera-operator-cffd8458f-csltt Jan 27 01:09:22.298: INFO: Fetching activity logs took 28.57941246s STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-jiigwq" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 01:09:22.298 STEP: Deleting cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 01:09:22.622 STEP: Deleting cluster clusterctl-upgrade-3mtzz4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/27/23 01:09:22.646 INFO: Waiting for the Cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 to be deleted STEP: Waiting for cluster clusterctl-upgrade-3mtzz4 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/27/23 01:09:22.663 STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/27/23 01:15:12.843 INFO: Deleting namespace clusterctl-upgrade-jiigwq < Exit [AfterEach] upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:489 @ 01/27/23 01:15:12.86 (7m45.275s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 01:15:12.86 Jan 27 01:15:12.861: INFO: FAILED! 
Jan 27 01:15:12.861: INFO: Cleaning up after "Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers" spec STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/27/23 01:15:12.861 INFO: "Should create a management cluster and then upgrade all the providers" started at Fri, 27 Jan 2023 01:15:20 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/27/23 01:15:20.62 (7.76s)
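The dump above covers only the workload cluster; when the spec fails with "No Control Plane machines came into existence", the objects the test was polling live on the bootstrap management cluster. A hedged triage sketch, assuming the default capz-system installation and the namespace created by this run:

  # Inspect the CAPI/CAPZ objects the wait was polling (namespace taken from this run)
  kubectl get cluster,kubeadmcontrolplane,machines -n clusterctl-upgrade-jiigwq
  kubectl get azurecluster,azuremachines -n clusterctl-upgrade-jiigwq
  # The controller log usually records why the control-plane Machine or its Azure VM never became ready
  kubectl logs -n capz-system deploy/capz-controller-manager --tail=200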
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a cluster that uses the intree cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
... skipping 642 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2009.082 seconds]
Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/clusterctl_upgrade.go:209
Captured StdOut/StdErr Output >>
cluster.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4 created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-3mtzz4 created
... skipping 13 lines ...
configmap/cni-clusterctl-upgrade-3mtzz4-calico-windows created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-3mtzz4 created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine clusterctl-upgrade-3mtzz4-md-0-54768d5f75-mkjkz, Cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4: [dialing from control plane to target node at clusterctl-upgrade-3mtzz4-md-0-n6q8r: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/clusterctl-upgrade-3mtzz4-md-0-n6q8r' under resource group 'clusterctl-upgrade-3mtzz4' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
<< Captured StdOut/StdErr Output
Timeline >>
INFO: "" started at Fri, 27 Jan 2023 00:41:51 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec @ 01/27/23 00:41:51.579
INFO: Creating namespace clusterctl-upgrade-jiigwq
... skipping 61 lines ...
STEP: Waiting for Ready cloud-controller-manager deployment pods @ 01/27/23 00:47:26.99
STEP: waiting for deployment kube-system/cloud-controller-manager to be available @ 01/27/23 00:47:27.508
Jan 27 00:47:27.508: INFO: starting to wait for deployment to become available
Jan 27 00:47:27.567: INFO: Deployment kube-system/cloud-controller-manager is now available, took 59.36365ms
INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4-control-plane to be provisioned
STEP: Waiting for one control plane node to exist @ 01/27/23 00:47:27.585
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:154 @ 01/27/23 01:07:27.585
STEP: Dumping logs from the "clusterctl-upgrade-3mtzz4" workload cluster @ 01/27/23 01:07:27.585
Jan 27 01:07:27.585: INFO: Dumping workload cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 logs
Jan 27 01:07:27.629: INFO: Collecting logs for Linux node clusterctl-upgrade-3mtzz4-control-plane-bbpl9 in cluster clusterctl-upgrade-3mtzz4 in namespace clusterctl-upgrade-jiigwq
Jan 27 01:07:41.174: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-3mtzz4-control-plane-bbpl9
... skipping 7 lines ...
Jan 27 01:08:47.896: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-6c99cd59c7-dc6w2, container calico-apiserver
Jan 27 01:08:47.896: INFO: Describing Pod calico-apiserver/calico-apiserver-6c99cd59c7-dc6w2
Jan 27 01:08:48.014: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-8fdfc695-cqccp, container calico-kube-controllers
Jan 27 01:08:48.014: INFO: Describing Pod calico-system/calico-kube-controllers-8fdfc695-cqccp
Jan 27 01:08:48.149: INFO: Creating log watcher for controller calico-system/calico-node-5krs6, container calico-node
Jan 27 01:08:48.150: INFO: Describing Pod calico-system/calico-node-5krs6
Jan 27 01:08:48.211: INFO: Error starting logs stream for pod calico-system/calico-node-5krs6, container calico-node: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found
Jan 27 01:08:48.275: INFO: Creating log watcher for controller calico-system/calico-node-sbv5r, container calico-node
Jan 27 01:08:48.275: INFO: Describing Pod calico-system/calico-node-sbv5r
Jan 27 01:08:48.409: INFO: Creating log watcher for controller calico-system/calico-typha-777988cb5b-stdx2, container calico-typha
Jan 27 01:08:48.409: INFO: Describing Pod calico-system/calico-typha-777988cb5b-stdx2
Jan 27 01:08:48.525: INFO: Describing Pod calico-system/csi-node-driver-gtl52
Jan 27 01:08:48.525: INFO: Creating log watcher for controller calico-system/csi-node-driver-gtl52, container calico-csi
Jan 27 01:08:48.526: INFO: Creating log watcher for controller calico-system/csi-node-driver-gtl52, container csi-node-driver-registrar
Jan 27 01:08:48.921: INFO: Describing Pod calico-system/csi-node-driver-srlfc
Jan 27 01:08:48.921: INFO: Creating log watcher for controller calico-system/csi-node-driver-srlfc, container calico-csi
Jan 27 01:08:48.921: INFO: Creating log watcher for controller calico-system/csi-node-driver-srlfc, container csi-node-driver-registrar
Jan 27 01:08:48.980: INFO: Error starting logs stream for pod calico-system/csi-node-driver-srlfc, container csi-node-driver-registrar: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found
Jan 27 01:08:48.980: INFO: Error starting logs stream for pod calico-system/csi-node-driver-srlfc, container calico-csi: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found
Jan 27 01:08:49.319: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-6dcd4c6dd6-fhtvj, container cloud-controller-manager
Jan 27 01:08:49.319: INFO: Describing Pod kube-system/cloud-controller-manager-6dcd4c6dd6-fhtvj
Jan 27 01:08:49.719: INFO: Creating log watcher for controller kube-system/cloud-node-manager-4tpkh, container cloud-node-manager
Jan 27 01:08:49.719: INFO: Describing Pod kube-system/cloud-node-manager-4tpkh
Jan 27 01:08:50.119: INFO: Creating log watcher for controller kube-system/cloud-node-manager-md6np, container cloud-node-manager
Jan 27 01:08:50.119: INFO: Describing Pod kube-system/cloud-node-manager-md6np
Jan 27 01:08:50.177: INFO: Error starting logs stream for pod kube-system/cloud-node-manager-md6np, container cloud-node-manager: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found
Jan 27 01:08:50.520: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-5cnd6, container coredns
Jan 27 01:08:50.520: INFO: Describing Pod kube-system/coredns-78fcd69978-5cnd6
Jan 27 01:08:50.919: INFO: Creating log watcher for controller kube-system/coredns-78fcd69978-fh95q, container coredns
Jan 27 01:08:50.919: INFO: Describing Pod kube-system/coredns-78fcd69978-fh95q
Jan 27 01:08:51.319: INFO: Describing Pod kube-system/etcd-clusterctl-upgrade-3mtzz4-control-plane-bbpl9
Jan 27 01:08:51.319: INFO: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container etcd
Jan 27 01:08:51.719: INFO: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-apiserver
Jan 27 01:08:51.719: INFO: Describing Pod kube-system/kube-apiserver-clusterctl-upgrade-3mtzz4-control-plane-bbpl9
Jan 27 01:08:52.117: INFO: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-controller-manager
Jan 27 01:08:52.117: INFO: Describing Pod kube-system/kube-controller-manager-clusterctl-upgrade-3mtzz4-control-plane-bbpl9
Jan 27 01:08:52.519: INFO: Describing Pod kube-system/kube-proxy-jf7kr
Jan 27 01:08:52.519: INFO: Creating log watcher for controller kube-system/kube-proxy-jf7kr, container kube-proxy
Jan 27 01:08:52.581: INFO: Error starting logs stream for pod kube-system/kube-proxy-jf7kr, container kube-proxy: pods "clusterctl-upgrade-3mtzz4-md-0-n6q8r" not found
Jan 27 01:08:52.923: INFO: Creating log watcher for controller kube-system/kube-proxy-xv6qn, container kube-proxy
Jan 27 01:08:52.923: INFO: Describing Pod kube-system/kube-proxy-xv6qn
Jan 27 01:08:53.318: INFO: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-3mtzz4-control-plane-bbpl9, container kube-scheduler
Jan 27 01:08:53.318: INFO: Describing Pod kube-system/kube-scheduler-clusterctl-upgrade-3mtzz4-control-plane-bbpl9
Jan 27 01:08:53.718: INFO: Fetching kube-system pod logs took 6.654387142s
Jan 27 01:08:53.718: INFO: Dumping workload cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 Azure activity log
... skipping 4 lines ...
STEP: Deleting cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 @ 01/27/23 01:09:22.622
STEP: Deleting cluster clusterctl-upgrade-3mtzz4 @ 01/27/23 01:09:22.646
INFO: Waiting for the Cluster clusterctl-upgrade-jiigwq/clusterctl-upgrade-3mtzz4 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-3mtzz4 to be deleted @ 01/27/23 01:09:22.663
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec @ 01/27/23 01:15:12.843
INFO: Deleting namespace clusterctl-upgrade-jiigwq
Jan 27 01:15:12.861: INFO: FAILED!
Jan 27 01:15:12.861: INFO: Cleaning up after "Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers" spec
STEP: Redacting sensitive information from logs @ 01/27/23 01:15:12.861
INFO: "Should create a management cluster and then upgrade all the providers" started at Fri, 27 Jan 2023 01:15:20 UTC on Ginkgo node 1 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 1200.000s. No Control Plane machines came into existence. Expected <bool>: false to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:154 @ 01/27/23 01:07:27.585
... skipping 22 lines ...
[ReportAfterSuite] PASSED [0.013 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
Summarizing 1 Failure:
[FAIL] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:154
Ran 1 of 24 Specs in 2158.829 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 23 Skipped
You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:282
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.0
--- FAIL: TestE2E (2158.83s)
FAIL
Ginkgo ran 1 suite in 39m47.79172493s
Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...