Result   | FAILURE
Tests    | 1 failed / 6 succeeded
Started  |
Elapsed  | 48m27s
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\s\[It\]\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\s\[ClusterClass\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
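The `--ginkgo.focus` argument above is a regular expression (Ginkgo uses Go's regexp engine) matched against the spec's full description. A purely illustrative sanity check of this particular pattern against the failed spec's name as it appears in this report:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus pattern from the job invocation, verbatim.
	focus := regexp.MustCompile(`capi\-e2e\s\[It\]\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\s\[ClusterClass\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$`)

	// The failed spec's name as shown in this report.
	spec := "capi-e2e [It] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest"

	fmt.Println(focus.MatchString(spec)) // true
}
```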
[FAILED] Timed out after 420.341s.
Timed out waiting for all MachinePool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0 instances to be upgraded to Kubernetes version v1.24.10
Error: function returned error: old version instances remain
<*fmt.wrapError | 0xc0008792a0>: {
    msg: "function returned error: old version instances remain",
    err: <*errors.fundamental | 0xc0016d1410>{
        msg: "old version instances remain",
        stack: [0x1cf8cfa, 0x4daf25, 0x4da41c, 0x886bda, 0x887b0c, 0x88532d, 0x1cf8b04, 0x1cf7225, 0x1db1c4a, 0x86321b, 0x877318, 0x4704e1],
    },
}
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 01/30/23 05:40:12.47
(from junit.e2e_suite.1.xml)
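The failure comes out of a polling wait in the test framework (test/framework/machinepool_helpers.go:268): the helper repeatedly lists the MachinePool's instances and errors while any still report the old Kubernetes version, and on expiry the error is wrapped, which is why the dump shows a *fmt.wrapError around an *errors.fundamental. A minimal, self-contained Go sketch of that pattern; the Instance type and function names here are illustrative stand-ins, not the real framework API:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Instance is an illustrative stand-in for a MachinePool instance
// as observed by the test (not the real framework type).
type Instance struct {
	Name    string
	Version string // kubelet version reported by the node
}

var errOldInstances = errors.New("old version instances remain")

// waitForInstancesToBeUpgraded polls until every instance reports the
// target version, or the timeout expires with a wrapped error matching
// the shape seen in the dump above.
func waitForInstancesToBeUpgraded(ctx context.Context, list func() []Instance, target string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		old := 0
		for _, in := range list() {
			if in.Version != target {
				old++
			}
		}
		if old == 0 {
			return nil // all instances upgraded
		}
		if time.Now().After(deadline) {
			// *fmt.wrapError wrapping the fundamental error, as in the log.
			return fmt.Errorf("function returned error: %w", errOldInstances)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	// Toy run: one instance never upgrades, so the wait times out.
	instances := []Instance{{Name: "mp-0-node", Version: "v1.23.16"}}
	err := waitForInstancesToBeUpgraded(context.Background(),
		func() []Instance { return instances }, "v1.24.10",
		2*time.Second, 500*time.Millisecond)
	fmt.Println(err) // function returned error: old version instances remain
}
```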
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-dexn3h-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-dmp-0 created
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-gncvf-r6ps8, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-md-0-hh74g-5c85d44764-2cqbs, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-md-0-hh74g-5c85d44764-fd98c, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for MachinePool k8s-upgrade-and-conformance-dexn3h-mp-0, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
> Enter [BeforeEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:84 @ 01/30/23 05:23:47.999
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 01/30/23 05:23:47.999
INFO: Creating namespace k8s-upgrade-and-conformance-uuxrzg
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-uuxrzg"
< Exit [BeforeEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:84 @ 01/30/23 05:23:48.034 (35ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118 @ 01/30/23 05:23:48.034
STEP: Creating a workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:119 @ 01/30/23 05:23:48.034
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-dexn3h" using the "upgrades-cgroupfs" template (Kubernetes v1.23.16, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-dexn3h --infrastructure (default) --kubernetes-version v1.23.16 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/cluster_helpers.go:134 @ 01/30/23 05:23:50.436
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-gncvf to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:132 @ 01/30/23 05:24:00.608
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-gncvf to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:164 @ 01/30/23 05:24:40.67
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:209 @ 01/30/23 05:25:00.709
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:102 @ 01/30/23 05:25:00.78
STEP: Checking all the machines controlled by k8s-upgrade-and-conformance-dexn3h-md-0-hh74g are in the "fd4" failure domain - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 01/30/23 05:25:41.169
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:79 @ 01/30/23 05:25:41.231
STEP: Upgrading the Cluster topology - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:144 @ 01/30/23 05:26:21.265
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.24.10 - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 01/30/23 05:26:21.31
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/daemonset_helpers.go:41 @ 01/30/23 05:30:51.654
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/deployment_helpers.go:335 @ 01/30/23 05:31:01.661
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-md-0-hh74g to be upgraded to v1.24.10
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.10
STEP: Upgrading the machinepool instances - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:203 @ 01/30/23 05:33:11.86
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0 to be upgraded from v1.23.16 to v1.24.10
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.24.10
[FAILED] Timed out after 420.341s.
Timed out waiting for all MachinePool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0 instances to be upgraded to Kubernetes version v1.24.10
Error: function returned error: old version instances remain
<*fmt.wrapError | 0xc0008792a0>: {
    msg: "function returned error: old version instances remain",
    err: <*errors.fundamental | 0xc0016d1410>{
        msg: "old version instances remain",
        stack: [0x1cf8cfa, 0x4daf25, 0x4da41c, 0x886bda, 0x887b0c, 0x88532d, 0x1cf8b04, 0x1cf7225, 0x1db1c4a, 0x86321b, 0x877318, 0x4704e1],
    },
}
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 01/30/23 05:40:12.47
< Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118 @ 01/30/23 05:40:12.471 (16m24.437s)
> Enter [AfterEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:242 @ 01/30/23 05:40:12.471
STEP: Dumping logs from the "k8s-upgrade-and-conformance-dexn3h" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 01/30/23 05:40:12.471
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-uuxrzg" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 01/30/23 05:40:15.173
STEP: Deleting cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 01/30/23 05:40:15.551
STEP: Deleting cluster k8s-upgrade-and-conformance-dexn3h - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 01/30/23 05:40:15.573
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-dexn3h to be deleted - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 01/30/23 05:40:15.599
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 01/30/23 05:40:35.623
INFO: Deleting namespace k8s-upgrade-and-conformance-uuxrzg
< Exit [AfterEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:242 @ 01/30/23 05:40:35.646 (23.175s)
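In the CAPI e2e suite, waits like the one that failed are conventionally expressed with gomega's Eventually, which is what produces the "Timed out after …" message format. A hedged sketch of that idiom; the upgraded func is an illustrative placeholder, not the real framework check, and the intervals are arbitrary:

```go
package e2e_test

import (
	"errors"
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// TestMachinePoolUpgradeWait shows the Eventually idiom used for such
// waits; the check body is a stand-in for a real query against the
// management cluster.
func TestMachinePoolUpgradeWait(t *testing.T) {
	g := NewWithT(t)

	upgraded := func() error {
		// Stand-in: a real check would list the MachinePool's node refs
		// and compare each reported kubelet version to the target version.
		return errors.New("old version instances remain")
	}

	// Polls every 10s for up to 7 minutes; on expiry gomega reports
	// "Timed out after ..." along with the last error, as in the log above.
	g.Eventually(upgraded, 7*time.Minute, 10*time.Second).Should(Succeed())
}
```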
capi-e2e [SynchronizedAfterSuite]
capi-e2e [SynchronizedAfterSuite]
capi-e2e [SynchronizedAfterSuite]
capi-e2e [SynchronizedBeforeSuite]
capi-e2e [SynchronizedBeforeSuite]
capi-e2e [SynchronizedBeforeSuite]
capi-e2e [It] When following the Cluster API quick-start [PR-Blocking] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start check owner references are correctly reconciled and rereconciled if deleted Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with ClusterClass check owner references are correctly reconciled and rereconciled if deleted [ClusterClass] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with IPv6 [IPv6] [PR-Informing] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with Ignition Should create a workload cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters using ClusterClass with a HA control plane [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on single-node self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capi-e2e [It] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
capi-e2e [It] When testing KCP adoption Should adopt up-to-date control plane Machines without modification
capi-e2e [It] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capi-e2e [It] When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capi-e2e [It] When testing MachinePools Should successfully create a cluster with machine pool machines
capi-e2e [It] When testing clusterctl upgrades (v0.3=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades (v0.4=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades (v1.2=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades (v1.3=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.2=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.3=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capi-e2e [It] When testing unhealthy machines remediation Should successfully trigger KCP remediation
capi-e2e [It] When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capi-e2e [It] When upgrading a workload cluster using ClusterClass [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e [It] When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create, upgrade and delete a workload cluster
capi-e2e [It] When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e [It] When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 620 lines ...
Updating kustomize pull policy file for manager resources
sed -i'' -e 's@imagePullPolicy: .*@imagePullPolicy: '"IfNotPresent"'@' ./test/extension/config/default/manager_pull_policy.yaml
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api'
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api'
+ KUBERNETES_VERSION_UPGRADE_TO="stable-1.24" resolved to "v1.24.10"
+ Pulling kindest/node:v1.24.10
Error response from daemon: manifest for kindest/node:v1.24.10 not found: manifest unknown: manifest unknown
+ image for Kubernetes v1.24.10 is not available in docker hub, trying local build
KUBE_ROOT /home/prow/go/src/k8s.io/kubernetes
+ Checkout branch for Kubernetes v1.24.10
+ checkout tag v1.24.10
Switched to a new branch 'v1.24.10-branch'
+ Setting version for Kubernetes build to v1.24.10
... skipping 47 lines ...
Finished building Kubernetes
Building node image ...
Building in container: kind-build-1675055257-34159234
Image "kindest/node:v1.24.10" build completed.
+ KUBERNETES_VERSION_UPGRADE_FROM="stable-1.23" resolved to "v1.23.16"
+ Pulling kindest/node:v1.23.16
Error response from daemon: manifest for kindest/node:v1.23.16 not found: manifest unknown: manifest unknown
+ image for Kubernetes v1.23.16 is not available in docker hub, trying local build
KUBE_ROOT /home/prow/go/src/k8s.io/kubernetes
+ Checkout branch for Kubernetes v1.23.16
+ checkout tag v1.23.16
Switched to a new branch 'v1.23.16-branch'
+ Setting version for Kubernetes build to v1.23.16
... skipping 165 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build test/e2e/data/infrastructure-docker/main/cluster-template-ignition --load-restrictor LoadRestrictionsNone > test/e2e/data/infrastructure-docker/main/cluster-template-ignition.yaml
mkdir -p test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build test/extension/config/default > test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.7.0 -v --trace -poll-progress-after=60m \
    -poll-progress-interval=5m --tags=e2e --focus="\[K8s-Upgrade\]" \
    --nodes=3 --timeout=2h --no-color=true \
    --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
go: downloading k8s.io/api v0.26.1
go: downloading github.com/onsi/gomega v1.25.0
go: downloading github.com/blang/semver v3.5.1+incompatible
... skipping 193 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169
------------------------------
• [FAILED] [1007.647 seconds]
When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
Captured StdOut/StdErr Output >>
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
... skipping 6 lines ...
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-dexn3h-dmp-0 created
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-gncvf-r6ps8, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-md-0-hh74g-5c85d44764-2cqbs, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-dexn3h-md-0-hh74g-5c85d44764-fd98c, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
Failed to get logs for MachinePool k8s-upgrade-and-conformance-dexn3h-mp-0, Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h: exit status 2
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec @ 01/30/23 05:23:47.999
INFO: Creating namespace k8s-upgrade-and-conformance-uuxrzg
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-uuxrzg"
... skipping 28 lines ...
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-md-0-hh74g to be upgraded to v1.24.10
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.10
STEP: Upgrading the machinepool instances @ 01/30/23 05:33:11.86
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0 to be upgraded from v1.23.16 to v1.24.10
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.24.10
[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 01/30/23 05:40:12.47
STEP: Dumping logs from the "k8s-upgrade-and-conformance-dexn3h" workload cluster @ 01/30/23 05:40:12.471
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-uuxrzg" namespace @ 01/30/23 05:40:15.173
STEP: Deleting cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h @ 01/30/23 05:40:15.551
STEP: Deleting cluster k8s-upgrade-and-conformance-dexn3h @ 01/30/23 05:40:15.573
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-dexn3h to be deleted @ 01/30/23 05:40:15.599
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec @ 01/30/23 05:40:35.623
INFO: Deleting namespace k8s-upgrade-and-conformance-uuxrzg
<< Timeline
[FAILED] Timed out after 420.341s.
Timed out waiting for all MachinePool k8s-upgrade-and-conformance-uuxrzg/k8s-upgrade-and-conformance-dexn3h-mp-0 instances to be upgraded to Kubernetes version v1.24.10
Error: function returned error: old version instances remain
<*fmt.wrapError | 0xc0008792a0>: {
    msg: "function returned error: old version instances remain",
    err: <*errors.fundamental | 0xc0016d1410>{
        msg: "old version instances remain",
        stack: [0x1cf8cfa, 0x4daf25, 0x4da41c, 0x886bda, 0x887b0c, 0x88532d, 0x1cf8b04, 0x1cf7225, 0x1db1c4a, 0x86321b, 0x877318, 0x4704e1],
    },
}
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 01/30/23 05:40:12.47
... skipping 23 lines ...
[ReportAfterSuite] PASSED [0.006 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 2 Failures:
[FAIL] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268
[INTERRUPTED] [SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 30 Specs in 1113.529 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 29 Skipped

Ginkgo ran 1 suite in 21m3.990064787s
Test Suite Failed
make: *** [Makefile:780: test-e2e] Error 1
WARNING: No swap limit support
ERROR: Found unexpected running containers:
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
bccd78d86b4c   kindest/node:v1.26.0   "/usr/local/bin/entr…"   18 minutes ago   Up 17 minutes   127.0.0.1:43227->6443/tcp   test-uo6ci6-control-plane
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
... skipping 9 lines ...
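The build log above also shows the job's node-image fallback: try to pull kindest/node:<version> from Docker Hub, and on "manifest unknown" build the image locally from a Kubernetes source checkout. A hedged Go sketch of that control flow only; the actual CI shell script differs, ensureNodeImage is a hypothetical name, and kind's build flags and positional arguments vary across kind versions:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and returns its error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// ensureNodeImage (illustrative) pulls the released kindest/node image if it
// exists, otherwise builds one from a local Kubernetes checkout.
func ensureNodeImage(version, kubeRoot string) error {
	image := "kindest/node:" + version
	if err := run("docker", "pull", image); err == nil {
		return nil // released image exists on Docker Hub
	}
	fmt.Printf("image for Kubernetes %s is not available in docker hub, trying local build\n", version)
	// `kind build node-image` builds kindest/node from a Kubernetes source
	// tree; exact flag/argument shape depends on the kind version in use.
	return run("kind", "build", "node-image", "--image", image, kubeRoot)
}

func main() {
	// Example: the versions this job resolved from stable-1.24/stable-1.23.
	for _, v := range []string{"v1.24.10", "v1.23.16"} {
		if err := ensureNodeImage(v, os.Getenv("KUBE_ROOT")); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```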