PR | lzhecheng: [release-1.3] Support using a customized template outside CAPZ repo
Result | FAILURE
Tests | 1 failed / 1 succeeded
Started |
Elapsed | 43m42s
Revision | ceee78752d3cb1e00bdcbeee144862b555f87103
Refs | 2310
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha3\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\s\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
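The --ginkgo.focus argument above is an ordinary regular expression matched against the full spec description. A minimal sketch in Go, assuming Ginkgo composes that description by joining the suite name and the Describe/Context/It texts with spaces (the comma escape from the command is simplified to a literal comma here); the spec string below is the failing test's name as it appears later in this report:

// focus_sketch.go - illustrative only, not part of the job output.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the repro command above.
	focus := regexp.MustCompile(`capz-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha3\sto\sv1beta1,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\s\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$`)

	// Full spec description (note the double space before "Should", mirrored by \s\s in the pattern).
	spec := "capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3  Should create a management cluster and then upgrade all the providers"

	fmt.Println(focus.MatchString(spec)) // prints: true
}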
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
Expected success, but got an error:
    <*errors.withStack | 0xc0004ab128>: {
        error: <*exec.ExitError | 0xc000f9bfc0>{
            ProcessState: {
                pid: 34458,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 430056},
                    Stime: {Sec: 0, Usec: 214828},
                    Maxrss: 95636, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 12672, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25192,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 4872, Nivcsw: 403,
                },
            },
            Stderr: nil,
        },
        stack: [0x2539955, 0x2539e7d, 0x26db52c, 0x2c2da0f, 0x15dee9a, 0x15de865, 0x15dd8fb, 0x15e41c9, 0x15e3ba7, 0x15f0f65, 0x15f0c85, 0x15f04c5, 0x15f27f2, 0x15ffd25, 0x15ffb3e, 0x2f913de, 0x1322e82, 0x125fb41],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272
from junit.e2e_suite.2.xml
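The value dumped above is an *exec.ExitError (exit status 1 from the failing clusterctl call) wrapped with a stack by github.com/pkg/errors. A hedged illustration of inspecting that kind of wrapped error with errors.As; the "false" command is a stand-in for the real invocation, not the actual test code:

// exiterr_sketch.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"os/exec"

	pkgerrors "github.com/pkg/errors"
)

func main() {
	// Stand-in for the failing "clusterctl config cluster ..." invocation in the log.
	cmd := exec.Command("false")
	err := pkgerrors.WithStack(cmd.Run())

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ProcessState carries the pid, wait status, and rusage fields seen in the dump.
		fmt.Println("exit code:", exitErr.ExitCode())
		fmt.Println("process state:", exitErr.ProcessState)
	}
}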
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
INFO: Creating namespace clusterctl-upgrade-kklo8o
INFO: Creating event watcher for namespace "clusterctl-upgrade-kklo8o"
STEP: Creating a workload cluster to be used as a new management cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-ah9et1" using the "(default)" template (Kubernetes v1.21.2, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-ah9et1 --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
cluster.cluster.x-k8s.io/clusterctl-upgrade-ah9et1 created
azurecluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-control-plane created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-0 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-md-win created
machinehealthcheck.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-mhc-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-ah9et1-calico created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
clusterresourceset.addons.cluster.x-k8s.io/containerd-logger-clusterctl-upgrade-ah9et1 created
configmap/cni-clusterctl-upgrade-ah9et1-calico created
configmap/csi-proxy-addon created
configmap/containerd-logger-clusterctl-upgrade-ah9et1 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-kklo8o/clusterctl-upgrade-ah9et1-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane clusterctl-upgrade-kklo8o/clusterctl-upgrade-ah9et1-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Turning the workload cluster into a management cluster with older versions of providers
INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/clusterctl-linux-amd64
STEP: Initializing the workload cluster with older versions of providers
INFO: clusterctl init --core cluster-api:v0.3.23 --bootstrap kubeadm:v0.3.23 --control-plane kubeadm:v0.3.23 --infrastructure azure:v0.4.15
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5c4d4c9db4-h49qx, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5c4d4c9db4-h49qx, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-685446d8d8-4r4mt, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-685446d8d8-4r4mt, container manager
STEP: Waiting for deployment capi-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-7bc9769778-tjcl2, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-7bc9769778-tjcl2, container manager
STEP: Waiting for deployment capi-webhook-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-d98d75d79-767pj, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-webhook-system/capi-controller-manager, pod capi-controller-manager-d98d75d79-767pj, container manager
STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-7b5976cb87-lgnqq, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-7b5976cb87-lgnqq, container manager
STEP: Waiting for deployment capi-webhook-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-5c78576f9c-84cd5, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-5c78576f9c-84cd5, container manager
STEP: Waiting for deployment capi-webhook-system/capz-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capz-controller-manager, pod capz-controller-manager-55f9c97c75-kbmfc, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-webhook-system/capz-controller-manager, pod capz-controller-manager-55f9c97c75-kbmfc, container manager
STEP: Waiting for deployment capz-system/capz-controller-manager to be available
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-58d8469fdb-5p84q, container kube-rbac-proxy
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-58d8469fdb-5p84q, container manager
STEP: THE MANAGEMENT CLUSTER WITH THE OLDER VERSION OF PROVIDERS IS UP&RUNNING!
STEP: Creating a namespace for hosting the clusterctl-upgrade test workload cluster
INFO: Creating namespace clusterctl-upgrade
INFO: Creating event watcher for namespace "clusterctl-upgrade"
STEP: Creating a test workload cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-vl3rki" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: Detect clusterctl version via: clusterctl version
INFO: clusterctl config cluster clusterctl-upgrade-vl3rki --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1alpha3-kubeadmcontrolplane?timeout=30s": dial tcp 10.103.103.76:443: connect: connection refused
STEP: Deleting all cluster.x-k8s.io/v1alpha3 clusters in namespace clusterctl-upgrade in management cluster clusterctl-upgrade-ah9et1
STEP: Deleting cluster clusterctl-upgrade-vl3rki
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-vl3rki to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vl3rki to be deleted
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-ah9et1
I0518 17:29:48.045990 30356 request.go:665] Waited for 1.134813021s due to client-side throttling, not priority and fairness, request: GET:https://clusterctl-upgrade-ah9et1-99e942d.uksouth.cloudapp.azure.com:6443/apis/policy/v1?timeout=32s
STEP: Redacting sensitive information from logs
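The template apply above failed because the apiserver could not reach the v1alpha3 mutating webhook behind capi-kubeadm-control-plane-webhook-service in capi-webhook-system (dial tcp 10.103.103.76:443: connection refused). A hedged client-go sketch, not part of the test suite, that lists which service each mutating webhook routes to, a first check when such a call is refused; the kubeconfig path is hypothetical and would point at the clusterctl-upgrade-ah9et1 management cluster:

// webhook_check_sketch.go - illustrative diagnostic only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig for the management cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "management.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	hooks, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, mwc := range hooks.Items {
		for _, wh := range mwc.Webhooks {
			// Print the service each webhook routes to, e.g.
			// capi-webhook-system/capi-kubeadm-control-plane-webhook-service.
			if svc := wh.ClientConfig.Service; svc != nil {
				fmt.Printf("%s -> service %s/%s\n", wh.Name, svc.Namespace, svc.Name)
			}
		}
	}
}

From there one would typically look at the endpoints of that service and the health of the webhook pod backing it.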
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster with dockershim [OPTIONAL] With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL] with a single control plane node and 1 node
... skipping 501 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind755682830
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743"
INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2120748920/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar361014769/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar1293562109/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-mgcnt, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 86 lines ...
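The three [WARNING] lines above mean the CAPI controller images were not present in the local Docker daemon ("reference does not exist"), so saving them for side-loading into the kind cluster failed. A hedged Go sketch of the usual check-then-load flow; the helper name is illustrative and this is not the CAPZ scripts themselves:

// kind_load_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// loadIntoKind verifies an image exists locally, then side-loads it into a kind cluster.
func loadIntoKind(image, kindCluster string) error {
	// "docker image inspect" exits non-zero when the image is absent from the local daemon.
	if err := exec.Command("docker", "image", "inspect", image).Run(); err != nil {
		return fmt.Errorf("image %q not found in the local daemon: %w", image, err)
	}
	// Equivalent to: kind load docker-image <image> --name <cluster>
	out, err := exec.Command("kind", "load", "docker-image", image, "--name", kindCluster).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kind load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadIntoKind("k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2", "capz-e2e"); err != nil {
		fmt.Println(err)
	}
}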
STEP: Creating a test workload cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-vl3rki" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: Detect clusterctl version via: clusterctl version
INFO: clusterctl config cluster clusterctl-upgrade-vl3rki --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1alpha3-kubeadmcontrolplane?timeout=30s": dial tcp 10.103.103.76:443: connect: connection refused
STEP: Deleting all cluster.x-k8s.io/v1alpha3 clusters in namespace clusterctl-upgrade in management cluster clusterctl-upgrade-ah9et1
STEP: Deleting cluster clusterctl-upgrade-vl3rki
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-vl3rki to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vl3rki to be deleted
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-ah9et1
... skipping 8 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:202
upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:203
Should create a management cluster and then upgrade all the providers [It]
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
Expected success, but got an error:
    <*errors.withStack | 0xc0004ab128>: {
        error: <*exec.ExitError | 0xc000f9bfc0>{
            ProcessState: {
                pid: 34458,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 430056},
                    Stime: {Sec: 0, Usec: 214828},
... skipping 172 lines ...
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 kube-system pod logs
STEP: Fetching kube-system pod logs took 679.609202ms
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-zg9d9
STEP: Collecting events for Pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-62946, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-lnl7t
STEP: Collecting events for Pod kube-system/calico-node-62946
STEP: Collecting events for Pod kube-system/kube-proxy-5sq62
STEP: Creating log watcher for controller kube-system/kube-proxy-mvzlj, container kube-proxy
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-mkz2p
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-v2vvm, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: Collecting events for Pod kube-system/kube-proxy-mvzlj
STEP: Creating log watcher for controller kube-system/kube-proxy-5sq62, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-v2vvm
STEP: Creating log watcher for controller kube-system/calico-node-lnl7t, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 233.056384ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-wtrork" namespace
STEP: Deleting cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1
STEP: Deleting cluster clusterctl-upgrade-ryvha1
INFO: Waiting for the Cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-ryvha1 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-v2vvm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-mkz2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mvzlj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5sq62, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-62946, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lnl7t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-wtrork
STEP: Redacting sensitive information from logs
• [SLOW TEST:1900.592 seconds]
... skipping 9 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272

Ran 2 of 24 Specs in 2256.868 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped

Ginkgo ran 1 suite in 39m14.148017841s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[1]: *** [Makefile:634: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:642: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...
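The failure summarized above is also recorded in junit.e2e_suite.2.xml, referenced earlier in this report. A hedged Go sketch of pulling failed spec names out of such a report; the structs below assume a conventional single-testsuite JUnit layout, not the exact schema of that file:

// junit_sketch.go - illustrative only.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

type testSuite struct {
	Tests    int        `xml:"tests,attr"`
	Failures int        `xml:"failures,attr"`
	Cases    []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:"message,attr"`
	Body    string `xml:",chardata"`
}

func main() {
	data, err := os.ReadFile("junit.e2e_suite.2.xml")
	if err != nil {
		panic(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		panic(err)
	}
	fmt.Printf("%d specs, %d failed\n", suite.Tests, suite.Failures)
	for _, tc := range suite.Cases {
		if tc.Failure != nil {
			fmt.Println("FAILED:", tc.Name)
		}
	}
}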