PR: k8s-infra-cherrypick-robot: [release-1.3] Use MSI ClientID as userAssignedIdentityID in azure.json
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2022-05-16 22:06
Elapsed: 37m39s
Revision: 5e2bd4ef4e3663ab1b2db3b5c9bc6ebaa38130a0
Refs: 2309

Test Failures


capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 Should create a management cluster and then upgrade all the providers (22m10s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha3\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\s\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
Timed out after 1200.002s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/controlplane_helpers.go:147
				
Full stdout/stderr: junit.e2e_suite.1.xml
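The "Timed out after 1200.002s ... Expected <bool>: false to be true" output above is the standard failure message from a Gomega Eventually assertion whose polled condition never returned true before the 20-minute timeout; the wait in question is in the cluster-api test framework's control-plane helpers (controlplane_helpers.go:147). A minimal Go sketch of that polling pattern, with a placeholder condition and illustrative timeout/interval values rather than the framework's actual code:

package example_test

import (
    "testing"
    "time"

    . "github.com/onsi/gomega"
)

// controlPlaneReady is a hypothetical stand-in for the condition the
// framework polls while waiting on the upgraded cluster's control plane.
func controlPlaneReady() bool { return false }

func TestWaitForControlPlane(t *testing.T) {
    g := NewWithT(t)
    // Poll the condition every 10s for up to 20 minutes (1200s). If it never
    // returns true, Gomega reports a failure of the same shape as above:
    //   Timed out after 1200.000s.
    //   Expected
    //       <bool>: false
    //   to be true
    g.Eventually(controlPlaneReady, 20*time.Minute, 10*time.Second).Should(BeTrue())
}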

Error lines from build-log.txt

... skipping 494 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind482168509
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220516220644"
INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2154197201/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar832252432/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar3953179630/image.tar": unable to read image data: Error response from daemon: reference does not exist
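The three "Unable to load image" warnings above mean the test framework tried to save each Cluster API controller image from the local Docker daemon to a temporary tar and pre-load it into the "capz-e2e" kind cluster, but the image references were not present in the daemon ("reference does not exist"); the run continues, since the kind nodes can still pull these public k8s.gcr.io images when the pods are created. A hedged sketch of performing that pre-load step by hand for one of the images, assuming docker and kind are installed and using the cluster name from the log (illustrative only, not part of the job):

package main

import (
    "log"
    "os"
    "os/exec"
)

// run executes a command and streams its output to the console.
func run(name string, args ...string) error {
    cmd := exec.Command(name, args...)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    img := "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
    // "reference does not exist" means the image is missing from the local
    // Docker daemon, so pull it first.
    if err := run("docker", "pull", img); err != nil {
        log.Fatal(err)
    }
    // Then load it into the kind cluster named in the log so the nodes do
    // not have to pull it again at pod-creation time.
    if err := run("kind", "load", "docker-image", img, "--name", "capz-e2e"); err != nil {
        log.Fatal(err)
    }
}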
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-brcc8, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 260 lines ...
STEP: Collecting events for Pod kube-system/kube-proxy-5dptx
STEP: Collecting events for Pod kube-system/kube-scheduler-clusterctl-upgrade-3z5sy2-control-plane-2txsm
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-q9qng, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-sdldg, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container etcd
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-jx7d5
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 216.306609ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-ij6npo" namespace
STEP: Deleting cluster clusterctl-upgrade-ij6npo/clusterctl-upgrade-3z5sy2
STEP: Deleting cluster clusterctl-upgrade-3z5sy2
INFO: Waiting for the Cluster clusterctl-upgrade-ij6npo/clusterctl-upgrade-3z5sy2 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-3z5sy2 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-wkp67, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-967tv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-q9qng, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sdldg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-jx7d5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5dptx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-ppx9h, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-3z5sy2-control-plane-2txsm, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-ij6npo
STEP: Redacting sensitive information from logs


• [SLOW TEST:1676.988 seconds]
... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3  [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/controlplane_helpers.go:147

Ran 2 of 24 Specs in 1894.058 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 33m11.871293078s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:628: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:636: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...