PR | k8s-infra-cherrypick-robot: [release-1.3] Use MSI ClientID as userAssignedIdentityID in azure.json
Result | FAILURE
Tests | 1 failed / 1 succeeded
Started |
Elapsed | 37m39s
Revision | 5e2bd4ef4e3663ab1b2db3b5c9bc6ebaa38130a0
Refs | 2309
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha3\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\s\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/controlplane_helpers.go:147
from junit.e2e_suite.1.xml
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
INFO: Creating namespace clusterctl-upgrade-ijcpjz
INFO: Creating event watcher for namespace "clusterctl-upgrade-ijcpjz"
STEP: Creating a workload cluster to be used as a new management cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-jtaicd" using the "(default)" template (Kubernetes v1.21.2, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-jtaicd --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
cluster.cluster.x-k8s.io/clusterctl-upgrade-jtaicd created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-jtaicd created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-0 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-md-win created
machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-jtaicd-calico created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-jtaicd created
configmap/cni-clusterctl-upgrade-jtaicd-calico created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-jtaicd created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-ijcpjz/clusterctl-upgrade-jtaicd-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping logs from the "clusterctl-upgrade-jtaicd" workload cluster
STEP: Dumping workload cluster clusterctl-upgrade-ijcpjz/clusterctl-upgrade-jtaicd logs
May 16 22:36:58.106: INFO: Collecting logs for Linux node clusterctl-upgrade-jtaicd-control-plane-q7rmf in cluster clusterctl-upgrade-jtaicd in namespace clusterctl-upgrade-ijcpjz
May 16 22:37:07.161: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-jtaicd-control-plane-q7rmf
May 16 22:37:07.958: INFO: Collecting logs for Linux node clusterctl-upgrade-jtaicd-md-0-pjblq in cluster clusterctl-upgrade-jtaicd in namespace clusterctl-upgrade-ijcpjz
May 16 22:37:11.121: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-jtaicd-md-0-pjblq
STEP: Redacting sensitive information from logs
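The "Expected <bool>: false to be true" message is Gomega's default output for a boolean poll that never succeeds before its timeout; the cluster-api test framework waits for the first control plane machine this way. A minimal sketch of that wait pattern, assuming a hypothetical controlPlaneMachineExists helper and illustrative intervals (the 20-minute timeout matches the 1200s reported above), not the framework's actual code:

```go
package e2e_test

import (
	"time"

	. "github.com/onsi/gomega"
)

// controlPlaneMachineExists stands in for the framework's real check (listing the
// Machines owned by the KubeadmControlPlane and looking for a provisioned one).
// It is a hypothetical helper, shown only to illustrate the failure mode.
func controlPlaneMachineExists() bool {
	// ...query the management cluster here...
	return false
}

func waitForOneControlPlaneMachine() {
	// Poll every 10s for 20 minutes (1200s). If the condition never flips to true,
	// Gomega fails with the generic "Timed out after 1200.00Xs. Expected <bool>:
	// false to be true" seen in this job.
	Eventually(controlPlaneMachineExists, 20*time.Minute, 10*time.Second).Should(BeTrue())
}
```

Consistent with that, the log above stops at "Waiting for one control plane node to exist" and jumps straight to dumping workload cluster logs: the first control plane machine for clusterctl-upgrade-jtaicd never became available inside the wait window.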
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster with dockershim [OPTIONAL] With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL] with a single control plane node and 1 node
... skipping 494 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind482168509
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220516220644"
INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2154197201/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar832252432/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar3953179630/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-brcc8, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 260 lines ...
STEP: Collecting events for Pod kube-system/kube-proxy-5dptx
STEP: Collecting events for Pod kube-system/kube-scheduler-clusterctl-upgrade-3z5sy2-control-plane-2txsm
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-q9qng, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-sdldg, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container etcd
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-jx7d5
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
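The "BadRequest ... resourceGroupName" failure above occurs when an empty resource group name is interpolated into the activity-log filter (the cluster's resource group was not known at that point), so the Monitor API rejects the query. A minimal sketch of that call shape, assuming the autorest-generated insights client from azure-sdk-for-go and an illustrative filter string, not the e2e log collector's actual code:

```go
package logcollector

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
	"github.com/Azure/go-autorest/autorest/azure/auth"
)

// dumpActivityLogs sketches the activity-log fetch. An empty groupName produces the
// 400 "Query parameter cannot be null empty or whitespace: resourceGroupName" seen
// above, so it is guarded here before calling List.
func dumpActivityLogs(ctx context.Context, subscriptionID, groupName string, since time.Time) error {
	if groupName == "" {
		return fmt.Errorf("resource group name is empty, skipping activity log collection")
	}

	authorizer, err := auth.NewAuthorizerFromEnvironment()
	if err != nil {
		return err
	}
	client := insights.NewActivityLogsClient(subscriptionID)
	client.Authorizer = authorizer

	filter := fmt.Sprintf("eventTimestamp ge '%s' and resourceGroupName eq '%s'",
		since.Format(time.RFC3339), groupName)

	page, err := client.List(ctx, filter, "")
	if err != nil {
		return err
	}
	for page.NotDone() {
		for _, event := range page.Values() {
			if event.OperationName != nil && event.OperationName.LocalizedValue != nil {
				fmt.Println(*event.OperationName.LocalizedValue)
			}
		}
		if err := page.NextWithContext(ctx); err != nil {
			return err
		}
	}
	return nil
}
```

The guard only avoids the noisy 400; the underlying fix is to pass the cluster's resource group name once it is known.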
STEP: Fetching activity logs took 216.306609ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-ij6npo" namespace
STEP: Deleting cluster clusterctl-upgrade-ij6npo/clusterctl-upgrade-3z5sy2
STEP: Deleting cluster clusterctl-upgrade-3z5sy2
INFO: Waiting for the Cluster clusterctl-upgrade-ij6npo/clusterctl-upgrade-3z5sy2 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-3z5sy2 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-wkp67, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-967tv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-q9qng, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sdldg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-jx7d5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5dptx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-ppx9h, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-ij6npo
STEP: Redacting sensitive information from logs
• [SLOW TEST:1676.988 seconds]
... skipping 9 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/controlplane_helpers.go:147

Ran 2 of 24 Specs in 1894.058 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped

Ginkgo ran 1 suite in 33m11.871293078s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.
Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[1]: *** [Makefile:628: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:636: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...