PR: lzhecheng: [release-1.3] Support using a customized template outside CAPZ repo
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2022-05-18 17:07
Elapsed: 43m42s
Revision: ceee78752d3cb1e00bdcbeee144862b555f87103
Refs: 2310

Test Failures


capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 Should create a management cluster and then upgrade all the providers (11m28s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sAPI\sVersion\sUpgrade\supgrade\sfrom\sv1alpha3\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\s\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
Expected success, but got an error:
    <*errors.withStack | 0xc0004ab128>: {
        error: <*exec.ExitError | 0xc000f9bfc0>{
            ProcessState: {
                pid: 34458,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 430056},
                    Stime: {Sec: 0, Usec: 214828},
                    Maxrss: 95636,
                    Ixrss: 0,
                    Idrss: 0,
                    Isrss: 0,
                    Minflt: 12672,
                    Majflt: 0,
                    Nswap: 0,
                    Inblock: 0,
                    Oublock: 25192,
                    Msgsnd: 0,
                    Msgrcv: 0,
                    Nsignals: 0,
                    Nvcsw: 4872,
                    Nivcsw: 403,
                },
            },
            Stderr: nil,
        },
        stack: [0x2539955, 0x2539e7d, 0x26db52c, 0x2c2da0f, 0x15dee9a, 0x15de865, 0x15dd8fb, 0x15e41c9, 0x15e3ba7, 0x15f0f65, 0x15f0c85, 0x15f04c5, 0x15f27f2, 0x15ffd25, 0x15ffb3e, 0x2f913de, 0x1322e82, 0x125fb41],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272
				



Error lines from build-log.txt

... skipping 501 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind755682830
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220518170743"
INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2120748920/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar361014769/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar1293562109/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-mgcnt, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 86 lines ...
STEP: Creating a test workload cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-vl3rki" using the "(default)" template (Kubernetes v1.22.9, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: Detect clusterctl version via: clusterctl version
INFO: clusterctl config cluster clusterctl-upgrade-vl3rki --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.kubeadmcontrolplane.controlplane.cluster.x-k8s.io": Post "https://capi-kubeadm-control-plane-webhook-service.capi-webhook-system.svc:443/mutate-controlplane-cluster-x-k8s-io-v1alpha3-kubeadmcontrolplane?timeout=30s": dial tcp 10.103.103.76:443: connect: connection refused

STEP: Deleting all cluster.x-k8s.io/v1alpha3 clusters in namespace clusterctl-upgrade in management cluster clusterctl-upgrade-ah9et1
STEP: Deleting cluster clusterctl-upgrade-vl3rki
INFO: Waiting for the Cluster clusterctl-upgrade/clusterctl-upgrade-vl3rki to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vl3rki to be deleted
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-ah9et1
... skipping 8 lines ...
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:202
    upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:203
      Should create a management cluster and then upgrade all the providers [It]
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147

      Expected success, but got an error:
          <*errors.withStack | 0xc0004ab128>: {
              error: <*exec.ExitError | 0xc000f9bfc0>{
                  ProcessState: {
                      pid: 34458,
                      status: 256,
                      rusage: {
                          Utime: {Sec: 0, Usec: 430056},
                          Stime: {Sec: 0, Usec: 214828},
... skipping 172 lines ...
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 kube-system pod logs
STEP: Fetching kube-system pod logs took 679.609202ms
STEP: Dumping workload cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-zg9d9
STEP: Collecting events for Pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-62946, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-lnl7t
STEP: Collecting events for Pod kube-system/calico-node-62946
STEP: Collecting events for Pod kube-system/kube-proxy-5sq62
STEP: Creating log watcher for controller kube-system/kube-proxy-mvzlj, container kube-proxy
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-mkz2p
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-v2vvm, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: Collecting events for Pod kube-system/kube-proxy-mvzlj
STEP: Creating log watcher for controller kube-system/kube-proxy-5sq62, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-v2vvm
STEP: Creating log watcher for controller kube-system/calico-node-lnl7t, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b
STEP: failed to find events of Pod "kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b"
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 233.056384ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-wtrork" namespace
STEP: Deleting cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1
STEP: Deleting cluster clusterctl-upgrade-ryvha1
INFO: Waiting for the Cluster clusterctl-upgrade-wtrork/clusterctl-upgrade-ryvha1 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-ryvha1 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-v2vvm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-mkz2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mvzlj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5sq62, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-62946, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-clusterctl-upgrade-ryvha1-control-plane-wmc4b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lnl7t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-zg9d9, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-wtrork
STEP: Redacting sensitive information from logs


• [SLOW TEST:1900.592 seconds]
... skipping 9 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3  [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:272

Ran 2 of 24 Specs in 2256.868 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 39m14.148017841s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:634: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:642: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...