Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Elapsed  | 42m42s
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\s\[ClusterClass\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:117
Timed out after 1200.002s.
Error: Unexpected non-nil/non-zero argument at index 1:
    <*errors.fundamental>: old nodes remain
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machine_helpers.go:151
from junit.e2e_suite.1.xml
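The failing check is framework.WaitForControlPlaneMachinesToBeUpgraded (named in the stack trace further down): it polls the control-plane Machines until all of them report the new Kubernetes version, and "old nodes remain" is the error it is still returning when the 1200s timeout expires. A minimal, self-contained sketch of that polling pattern — the simplified Machine type and function signature here are illustrative, not the framework's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Machine is a simplified stand-in for the Cluster API Machine type;
// only the field relevant to the upgrade check is modeled.
type Machine struct {
	Name    string
	Version string // Kubernetes version the machine reports
}

// waitForMachinesToBeUpgraded polls until `want` machines run newVersion.
// If the timeout elapses while any machine still reports the old version,
// the poll gives up and the error surfaces in the Ginkgo output.
func waitForMachinesToBeUpgraded(list func() []Machine, newVersion string, want int, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		upgraded := 0
		for _, m := range list() {
			if m.Version == newVersion {
				upgraded++
			}
		}
		if upgraded == want {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("old nodes remain")
		}
		time.Sleep(interval)
	}
}

func main() {
	// One control-plane machine stuck on the old version never satisfies
	// the check, so the wait (20 minutes in the job) times out.
	machines := []Machine{{Name: "cp-0", Version: "v1.24.1"}}
	err := waitForMachinesToBeUpgraded(
		func() []Machine { return machines },
		"v1.25.0-alpha.0.978+04c6c484633252", 1,
		2*time.Second, 500*time.Millisecond,
	)
	fmt.Println(err) // old nodes remain
}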
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-xkidbw
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-xkidbw"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-jiv3az" using the "upgrades" template (Kubernetes v1.24.1, 1 control-plane machine, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-jiv3az --infrastructure (default) --kubernetes-version v1.24.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-jiv3az-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-jiv3az-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-jiv3az-mp-0-config created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-jiv3az created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-jiv3az-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-jiv3az-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az-mpd56 to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az-mpd56 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by k8s-upgrade-and-conformance-jiv3az-md-0-rzt4x are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.25.0-alpha.0.978+04c6c484633252
STEP: Dumping logs from the "k8s-upgrade-and-conformance-jiv3az" workload cluster
Failed to get logs for machine k8s-upgrade-and-conformance-jiv3az-md-0-rzt4x-755658dd7b-2dwdk, cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-jiv3az-md-0-rzt4x-755658dd7b-nwwhk, cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-jiv3az-mpd56-6p4mw, cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az: exit status 2
Failed to get logs for machine k8s-upgrade-and-conformance-jiv3az-mpd56-bz4dz, cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az: exit status 2
Failed to get logs for machine pool k8s-upgrade-and-conformance-jiv3az-mp-0, cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-xkidbw" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az
STEP: Deleting cluster k8s-upgrade-and-conformance-jiv3az
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-xkidbw/k8s-upgrade-and-conformance-jiv3az to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-jiv3az to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Deleting namespace k8s-upgrade-and-conformance-xkidbw
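The "Upgrading the Cluster topology" step above amounts to bumping spec.topology.version on the Cluster object; the topology controller then rolls the control plane and workers to the new version, which is exactly what the failed wait was watching for. A rough sketch of that patch with controller-runtime — assuming the v1beta1 API types, a kubeconfig pointing at the management cluster, and the object names from this run; the e2e framework performs this through its own helpers rather than this exact code:

package main

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	ctx := context.Background()

	// Register the Cluster API types with a client for the management cluster.
	scheme := runtime.NewScheme()
	if err := clusterv1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	// Fetch the Cluster that owns the managed topology (names from this run).
	cluster := &clusterv1.Cluster{}
	key := client.ObjectKey{
		Namespace: "k8s-upgrade-and-conformance-xkidbw",
		Name:      "k8s-upgrade-and-conformance-jiv3az",
	}
	if err := c.Get(ctx, key, cluster); err != nil {
		panic(err)
	}

	// Bump spec.topology.version (non-nil for ClusterClass-managed clusters);
	// the topology controller reconciles the control plane and
	// MachineDeployments toward the new version.
	patch := client.MergeFrom(cluster.DeepCopy())
	cluster.Spec.Topology.Version = "v1.25.0-alpha.0.978+04c6c484633252"
	if err := c.Patch(ctx, cluster, patch); err != nil {
		panic(err)
	}
}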
capi-e2e When following the Cluster API quick-start [PR-Blocking] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with IPv6 [IPv6] [PR-Informing] Should create a workload cluster
capi-e2e When following the Cluster API quick-start with Ignition Should create a workload cluster
capi-e2e When testing Cluster API working on self-hosted clusters Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capi-e2e When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
capi-e2e When testing KCP adoption Should adopt up-to-date control plane Machines without modification
capi-e2e When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capi-e2e When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capi-e2e When testing MachinePools Should successfully create a cluster with machine pool machines
capi-e2e When testing clusterctl upgrades [clusterctl-Upgrade] Should create a management cluster and then upgrade all the providers
capi-e2e When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capi-e2e When testing unhealthy machines remediation Should successfully trigger KCP remediation
capi-e2e When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capi-e2e When upgrading a workload cluster using ClusterClass [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create and upgrade a workload cluster
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 749 lines ...
+ retVal=0
++ docker images -q kindest/node:v1.25.0-alpha.0.978_04c6c484633252
+ [[ '' == '' ]]
+ echo '+ Pulling kindest/node:v1.25.0-alpha.0.978_04c6c484633252'
+ Pulling kindest/node:v1.25.0-alpha.0.978_04c6c484633252
+ docker pull kindest/node:v1.25.0-alpha.0.978_04c6c484633252
Error response from daemon: manifest for kindest/node:v1.25.0-alpha.0.978_04c6c484633252 not found: manifest unknown: manifest unknown
+ retVal=1
+ [[ 1 != 0 ]]
+ echo '+ image for Kuberentes v1.25.0-alpha.0.978+04c6c484633252 is not available in docker hub, trying local build'
+ image for Kuberentes v1.25.0-alpha.0.978+04c6c484633252 is not available in docker hub, trying local build
+ kind::buildNodeImage v1.25.0-alpha.0.978+04c6c484633252
+ local version=v1.25.0-alpha.0.978+04c6c484633252
... skipping 497 lines ...
... (workload-cluster creation, upgrade, and teardown log identical to the excerpt above) ...
... skipping 4 lines ...
When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:29
  Should create and upgrade a workload cluster and eventually run kubetest [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:117

  Timed out after 1200.002s.
  Error: Unexpected non-nil/non-zero argument at index 1:
      <*errors.fundamental>: old nodes remain
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machine_helpers.go:151

  Full Stack Trace
  sigs.k8s.io/cluster-api/test/framework.WaitForControlPlaneMachinesToBeUpgraded({0x24acdc8?, 0xc000620a00}, {{0x7fd28a5fcc00, 0xc0005721c0}, 0xc000710e00, {0xc00005e05e, 0x22}, 0x1}, {0xc0017ba240, 0x2, ...})
... skipping 29 lines ...
testing.tRunner(0xc000580b60, 0x22677f8)
	/usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1486 +0x35f
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-ildj2j-control-plane: exit status 2
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machine_helpers.go:151

Ran 1 of 21 Specs in 1526.090 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped

Ginkgo ran 1 suite in 26m27.850238529s
Test Suite Failed

make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 62960
++ pgrep -f 'ctr -n moby events'
+ kill 62961
... skipping 24 lines ...