Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 11m37s
Revision | release-1.2
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sclusterctl\supgrades\s\[clusterctl\-Upgrade\]\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
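The --ginkgo.focus argument above is a regular expression that Ginkgo matches against the full spec description; unescaped, it is simply the name of the failing spec reported in the summary below. A minimal, hypothetical Go sketch (not part of the test suite) showing the pattern matching that spec name:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The --ginkgo.focus value from the command above, with shell quoting removed.
	// Ginkgo treats it as a regular expression matched against the full spec text.
	focus := regexp.MustCompile(`capi\-e2e\sWhen\stesting\sclusterctl\supgrades\s\[clusterctl\-Upgrade\]\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$`)

	// Full description of the failing spec, as it appears in the failure summary.
	spec := "capi-e2e When testing clusterctl upgrades [clusterctl-Upgrade] " +
		"Should create a management cluster and then upgrade all the providers"

	fmt.Println(focus.MatchString(spec)) // true
}
```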
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:163

failed to run clusterctl init:
stdout:
Fetching providers
Error: failed to get provider components for the "kubeadm:v0.3.23" provider: failed to parse yaml: failed to unmarshal the 1st yaml document: "\ufeff<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ServerBusy</Code><Message>Egress is over the account limit.\nRequestId:a97b0a0f-401e-0035-2e7f-044e71000000\nTime:2022-11-30T05:49:06.0133547Z</Message></Error>\n": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
stderr:
Unexpected error:
    <*exec.ExitError | 0xc00060e000>: {
        ProcessState: {
            pid: 51537,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 325032},
                Stime: {Sec: 0, Usec: 40503},
                Maxrss: 82700, Ixrss: 0, Idrss: 0, Isrss: 0,
                Minflt: 3966, Majflt: 0, Nswap: 0,
                Inblock: 16, Oublock: 0,
                Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                Nvcsw: 2026, Nivcsw: 72,
            },
        },
        Stderr: nil,
    }
    exit status 1
occurred

/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/clusterctl/client.go:113
from junit.e2e_suite.1.xml
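The failure is not a problem with the provider manifests themselves: the body clusterctl received for the "kubeadm:v0.3.23" components is a UTF-8-BOM-prefixed XML ServerBusy error reporting "Egress is over the account limit" (the format used by throttled Azure blob storage), and that error page was then fed to the YAML parser. A minimal, hypothetical sketch of how such a response could be recognized before attempting the YAML unmarshal (the type and function names are illustrative, not clusterctl code):

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

// storageError mirrors the <Error><Code>…</Code><Message>…</Message> body
// quoted in the failure message above.
type storageError struct {
	Code    string `xml:"Code"`
	Message string `xml:"Message"`
}

// checkComponentsBody returns a descriptive error when the downloaded
// "provider components" are actually an XML error document (optionally
// prefixed with a UTF-8 BOM, as in the log), so a caller could retry the
// download instead of handing the body to the YAML parser.
func checkComponentsBody(body []byte) error {
	trimmed := bytes.TrimPrefix(body, []byte("\ufeff")) // the \ufeff seen in the log
	if !bytes.HasPrefix(trimmed, []byte("<?xml")) {
		return nil // plausibly YAML; let normal parsing proceed
	}
	var e storageError
	if err := xml.Unmarshal(trimmed, &e); err != nil {
		return fmt.Errorf("provider components are not YAML: %q", trimmed)
	}
	return fmt.Errorf("provider components download was rejected upstream (%s): %s", e.Code, e.Message)
}

func main() {
	body := []byte("\ufeff<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
		"<Error><Code>ServerBusy</Code><Message>Egress is over the account limit.</Message></Error>")
	fmt.Println(checkComponentsBody(body))
}
```

In the actual run the raw body went straight to the YAML-to-JSON conversion, which is why the reported error is the less obvious "cannot unmarshal string into Go value of type map[string]interface {}".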
STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
INFO: Creating namespace clusterctl-upgrade-s5r9o6
INFO: Creating event watcher for namespace "clusterctl-upgrade-s5r9o6"
STEP: Creating a workload cluster to be used as a new management cluster
INFO: Creating the workload cluster with name "clusterctl-upgrade-2upkr5" using the "(default)" template (Kubernetes v1.21.12, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster clusterctl-upgrade-2upkr5 --infrastructure (default) --kubernetes-version v1.21.12 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
configmap/cni-clusterctl-upgrade-2upkr5-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-crs-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-md-0 created
cluster.cluster.x-k8s.io/clusterctl-upgrade-2upkr5 created
machinedeployment.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-2upkr5 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/clusterctl-upgrade-2upkr5-md-0 created
INFO: Calling PreWaitForCluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by clusterctl-upgrade-2upkr5-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Turning the workload cluster into a management cluster with older versions of providers
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:dev"
INFO: Image gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:dev is present in local container image cache
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:dev"
INFO: Image gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:dev is present in local container image cache
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:dev"
INFO: Image gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:dev is present in local container image cache
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capd-manager-amd64:dev"
INFO: Image gcr.io/k8s-staging-cluster-api/capd-manager-amd64:dev is present in local container image cache
INFO: Loading image: "gcr.io/k8s-staging-cluster-api/test-extension-amd64:dev"
INFO: Image gcr.io/k8s-staging-cluster-api/test-extension-amd64:dev is present in local container image cache
INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.9.1"
INFO: Image quay.io/jetstack/cert-manager-cainjector:v1.9.1 is present in local container image cache
INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.9.1"
INFO: Image quay.io/jetstack/cert-manager-webhook:v1.9.1 is present in local container image cache
INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.9.1"
INFO: Image quay.io/jetstack/cert-manager-controller:v1.9.1 is present in local container image cache
INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.25/clusterctl-linux-amd64
STEP: Initializing the workload cluster with older versions of providers
INFO: clusterctl init --core cluster-api:v0.3.23 --bootstrap kubeadm:v0.3.23 --control-plane kubeadm:v0.3.23 --infrastructure docker:v0.3.23
STEP: Dumping logs from the "clusterctl-upgrade-2upkr5" workload cluster
Failed to get logs for machine clusterctl-upgrade-2upkr5-control-plane-666vt, cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5: exit status 2
Failed to get logs for machine clusterctl-upgrade-2upkr5-md-0-6dbd79dc55-m6g82, cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-s5r9o6" namespace
STEP: Deleting cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5
STEP: Deleting cluster clusterctl-upgrade-2upkr5
INFO: Waiting for the Cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-2upkr5 to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-s5r9o6
capi-e2e When following the Cluster API quick-start [PR-Blocking] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with IPv6 [IPv6] [PR-Informing] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with Ignition Should create a workload cluster
capi-e2e When testing Cluster API working on self-hosted clusters Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capi-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
capi-e2e When testing KCP adoption Should adopt up-to-date control plane Machines without modification
capi-e2e When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capi-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capi-e2e When testing MachinePools Should successfully create a cluster with machine pool machines
capi-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capi-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capi-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capi-e2e When upgrading a workload cluster using ClusterClass [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create, upgrade and delete a workload cluster
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 1144 lines ...
INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.9.1"
INFO: Image quay.io/jetstack/cert-manager-controller:v1.9.1 is present in local container image cache
INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.25/clusterctl-linux-amd64
STEP: Initializing the workload cluster with older versions of providers
INFO: clusterctl init --core cluster-api:v0.3.23 --bootstrap kubeadm:v0.3.23 --control-plane kubeadm:v0.3.23 --infrastructure docker:v0.3.23
STEP: Dumping logs from the "clusterctl-upgrade-2upkr5" workload cluster
Failed to get logs for machine clusterctl-upgrade-2upkr5-control-plane-666vt, cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5: exit status 2
Failed to get logs for machine clusterctl-upgrade-2upkr5-md-0-6dbd79dc55-m6g82, cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-s5r9o6" namespace
STEP: Deleting cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5
STEP: Deleting cluster clusterctl-upgrade-2upkr5
INFO: Waiting for the Cluster clusterctl-upgrade-s5r9o6/clusterctl-upgrade-2upkr5 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-2upkr5 to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
... skipping 3 lines ...
• Failure [190.094 seconds]
When testing clusterctl upgrades [clusterctl-Upgrade]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade_test.go:26
  Should create a management cluster and then upgrade all the providers [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:163

  failed to run clusterctl init:
  stdout:
  Fetching providers
  Error: failed to get provider components for the "kubeadm:v0.3.23" provider: failed to parse yaml: failed to unmarshal the 1st yaml document: "\ufeff<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ServerBusy</Code><Message>Egress is over the account limit.\nRequestId:a97b0a0f-401e-0035-2e7f-044e71000000\nTime:2022-11-30T05:49:06.0133547Z</Message></Error>\n": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
  stderr:
  Unexpected error:
      <*exec.ExitError | 0xc00060e000>: {
          ProcessState: {
              pid: 51537,
              status: 256,
              rusage: {
                  Utime: {Sec: 0, Usec: 325032},
  ... skipping 55 lines ...
  testing.tRunner(0xc0006029c0, 0x22dd6d8)
      /usr/local/go/src/testing/testing.go:1439 +0x102
  created by testing.(*T).Run
      /usr/local/go/src/testing/testing.go:1486 +0x35f
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-czmep6-control-plane: exit status 2
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] When testing clusterctl upgrades [clusterctl-Upgrade] [It] Should create a management cluster and then upgrade all the providers
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/clusterctl/client.go:113

Ran 1 of 21 Specs in 311.570 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped

Ginkgo ran 1 suite in 6m17.0120346s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.

Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make: *** [Makefile:130: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25920
++ pgrep -f 'ctr -n moby events'
+ kill 25921
... skipping 21 lines ...