Result   | FAILURE
Tests    | 1 failed / 6 succeeded
Started  |
Elapsed  | 23m57s
Revision | release-1.3
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\s\[It\]\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\s\[ClusterClass\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 400.085s.
Timed out waiting for all MachinePool k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-mp-0 instances to be upgraded to Kubernetes version v1.20.15
Error: function returned error: old version instances remain
<*fmt.wrapError | 0xc001a4a2a0>: {
    msg: "function returned error: old version instances remain",
    err: <*errors.fundamental | 0xc000c40780>{
        msg: "old version instances remain",
        stack: [0x180919a, 0x4dacc5, 0x4da1bc, 0x883a3a, 0x884942, 0x8822ad, 0x1808fa4, 0x18076c5, 0x1ca190a, 0x861c5b, 0x874ad8, 0x4704c1],
    },
}
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 11/21/22 14:25:49.703
from junit.e2e_suite.1.xml
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-41hs3g-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-dmp-0 created
Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-7r8vv-rkcqf, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj-84d4b47cdd-7r2pk, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj-84d4b47cdd-v2mjh, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
Failed to get logs for MachinePool k8s-upgrade-and-conformance-41hs3g-mp-0, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
> Enter [BeforeEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] -
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:84 @ 11/21/22 14:10:54.297
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 11/21/22 14:10:54.297
INFO: Creating namespace k8s-upgrade-and-conformance-af003p
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-af003p"
< Exit [BeforeEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:84 @ 11/21/22 14:10:54.335 (38ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118 @ 11/21/22 14:10:54.335
STEP: Creating a workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:119 @ 11/21/22 14:10:54.335
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-41hs3g" using the "upgrades-cgroupfs" template (Kubernetes v1.19.16, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-41hs3g --infrastructure (default) --kubernetes-version v1.19.16 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/cluster_helpers.go:134 @ 11/21/22 14:10:58.213
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-7r8vv to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:133 @ 11/21/22 14:11:08.54
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-7r8vv to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:165 @ 11/21/22 14:11:58.638
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:196 @ 11/21/22 14:12:18.666
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:102 @ 11/21/22 14:12:18.737
STEP: Checking all the machines controlled by k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj are in the "fd4" failure domain - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 11/21/22 14:12:38.82
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:79 @ 11/21/22 14:12:38.9
STEP: Upgrading the Cluster topology - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:144 @ 11/21/22 14:13:28.97
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.20.15 - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 11/21/22 14:13:29.086
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image -
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/daemonset_helpers.go:40 @ 11/21/22 14:17:59.449
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/deployment_helpers.go:276 @ 11/21/22 14:17:59.453
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj to be upgraded to v1.20.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.20.15
STEP: Upgrading the machinepool instances - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:203 @ 11/21/22 14:19:09.526
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-mp-0 to be upgraded from v1.19.16 to v1.20.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.20.15

Automatically polling progress: When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.039s)
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 10m0s)
    /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  At [By Step] Upgrading the machinepool instances (Step Runtime: 1m44.81s)
    /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:203

Spec Goroutine
goroutine 27093 [select]
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x264ca80, 0xc000136000}, 0xc00269a1e0, 0x119622a?)
  /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:660
k8s.io/apimachinery/pkg/util/wait.poll({0x264ca80, 0xc000136000}, 0x48?, 0x1194fe5?, 0x50?)
  /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:596
k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x264ca80, 0xc000136000}, 0x263e218?, 0xc001122898?, 0x40fac7?)
  /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:528
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ca4200?, 0x263e218?, 0x38cb278?)
  /home/prow/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:514
sigs.k8s.io/cluster-api/test/framework.getMachinePoolInstanceVersions({0x264ca48?, 0xc00041c440}, {{0x7f6cec417e80, 0xc000426930}, {0xc000df65d0, 0x22}, 0xc00019b500})
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:289
sigs.k8s.io/cluster-api/test/framework.WaitForMachinePoolInstancesToBeUpgraded.func1()
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:250
reflect.Value.call({0x1e70500?, 0xc0003fae40?, 0x60?}, {0x22a847f, 0x4}, {0x38cb278, 0x0, 0x0?})
  /usr/local/go/src/reflect/value.go:584
reflect.Value.Call({0x1e70500?, 0xc0003fae40?, 0x1?}, {0x38cb278?, 0x0?, 0xc000212000?})
  /usr/local/go/src/reflect/value.go:368
github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
  /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:269
github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004269a0, {0x263e338?, 0xc0020ae2f0}, 0x1, {0xc00096cf30, 0x3, 0x3})
  /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:428
github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004269a0, {0x263e338, 0xc0020ae2f0}, {0xc00096cf30, 0x3, 0x3})
  /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
sigs.k8s.io/cluster-api/test/framework.WaitForMachinePoolInstancesToBeUpgraded({0x264ca48?, 0xc00041c440}, {{0x7f6cec417e80, 0xc00026f880}, {0x7f6cec417e80, 0xc000426930}, 0xc001375ba0, {0xc00005628e, 0x8}, 0x2, ...}, ...)
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268
sigs.k8s.io/cluster-api/test/framework.UpgradeMachinePoolAndWait({0x264ca48?, 0xc00041c440}, {{0x265afe8, 0xc001991b00}, 0xc001375ba0, {0xc00005628e, 0x8}, {0xc0005c67a0, 0x1, 0x1}, ...})
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:176
> sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:204
  | if len(clusterResources.MachinePools) > 0 && workerMachineCount > 0 {
  |   By("Upgrading the machinepool instances")
  > framework.UpgradeMachinePoolAndWait(ctx, framework.UpgradeMachinePoolAndWaitInput{
  |   ClusterProxy: input.BootstrapClusterProxy,
  |   Cluster:      clusterResources.Cluster,
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8260e, 0xc001bfc900})
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.5.0/internal/node.go:445
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.5.0/internal/suite.go:820
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.5.0/internal/suite.go:807

[FAILED] Timed out after 400.085s.
Timed out waiting for all MachinePool k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-mp-0 instances to be upgraded to Kubernetes version v1.20.15
Error: function returned error: old version instances remain
<*fmt.wrapError | 0xc001a4a2a0>: {
    msg: "function returned error: old version instances remain",
    err: <*errors.fundamental | 0xc000c40780>{
        msg: "old version instances remain",
        stack: [0x180919a, 0x4dacc5, 0x4da1bc, 0x883a3a, 0x884942, 0x8822ad, 0x1808fa4, 0x18076c5, 0x1ca190a, 0x861c5b, 0x874ad8, 0x4704c1],
    },
}
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268 @ 11/21/22 14:25:49.703
< Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118 @ 11/21/22 14:25:49.703 (14m55.368s)
> Enter [AfterEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:242 @ 11/21/22 14:25:49.703
STEP: Dumping logs from the "k8s-upgrade-and-conformance-41hs3g" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 11/21/22 14:25:49.703
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-af003p" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 11/21/22 14:25:55.455
STEP: Deleting cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 11/21/22 14:25:56.12
STEP: Deleting cluster k8s-upgrade-and-conformance-41hs3g - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 11/21/22 14:25:56.153
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-41hs3g to be deleted - /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/ginkgoextensions/output.go:35 @ 11/21/22 14:25:56.177
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/common.go:51 @ 11/21/22 14:26:06.196
INFO: Deleting namespace k8s-upgrade-and-conformance-af003p
< Exit [AfterEach] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:242 @ 11/21/22 14:26:06.231 (16.528s)
capi-e2e [SynchronizedAfterSuite]
capi-e2e [SynchronizedBeforeSuite]
capi-e2e [It] When following the Cluster API quick-start [PR-Blocking] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with ClusterClass [PR-Informing] [ClusterClass] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with IPv6 [IPv6] [PR-Informing] Should create a workload cluster
capi-e2e [It] When following the Cluster API quick-start with Ignition Should create a workload cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on self-hosted clusters using ClusterClass with a HA control plane [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing Cluster API working on single-node self-hosted clusters using ClusterClass [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capi-e2e [It] When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capi-e2e [It] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
capi-e2e [It] When testing KCP adoption Should adopt up-to-date control plane Machines without modification
capi-e2e [It] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capi-e2e [It] When testing MachineDeployment scale out/in Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capi-e2e [It] When testing MachinePools Should successfully create a cluster with machine pool machines
capi-e2e [It] When testing clusterctl upgrades (v0.3=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades (v0.4=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades (v1.2=>current) Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing clusterctl upgrades using ClusterClass (v1.2=>current) [ClusterClass] Should create a management cluster and then upgrade all the providers
capi-e2e [It] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
capi-e2e [It] When testing unhealthy machines remediation Should successfully trigger KCP remediation
capi-e2e [It] When testing unhealthy machines remediation Should successfully trigger machine deployment remediation
capi-e2e [It] When upgrading a workload cluster using ClusterClass [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e [It] When upgrading a workload cluster using ClusterClass with RuntimeSDK [PR-Informing] [ClusterClass] Should create, upgrade and delete a workload cluster
capi-e2e [It] When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capi-e2e [It] When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
... skipping 777 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build test/e2e/data/infrastructure-docker/v1beta1/main/cluster-template-ignition --load-restrictor LoadRestrictionsNone > test/e2e/data/infrastructure-docker/v1beta1/main/cluster-template-ignition.yaml
mkdir -p test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build test/extension/config/default > test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.4.0 -v --trace -poll-progress-after=10m \
    -poll-progress-interval=1m --tags=e2e --focus="\[K8s-Upgrade\]" \
    --nodes=3 --timeout=2h --no-color=true \
    --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
Ginkgo detected a version mismatch between the Ginkgo CLI and the version of Ginkgo imported by your packages:
    Ginkgo CLI Version: 2.4.0
... skipping 545 lines ...
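The failing spec was selected because Ginkgo's `--focus` flag is an unanchored regular expression matched against each spec's full concatenated description. A quick way to sanity-check that the `--focus="\[K8s-Upgrade\]"` pattern from the invocation above selects this spec (illustrative only):

```python
import re

# Focus pattern from the ginkgo invocation, and the full name of the failing
# spec as Ginkgo concatenates it (container description + It description).
focus = r"\[K8s-Upgrade\]"
spec = ("When upgrading a workload cluster using ClusterClass and testing "
        "K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] "
        "Should create and upgrade a workload cluster and eventually run kubetest")

print(bool(re.search(focus, spec)))  # -> True
```

The brackets in the tag must be escaped (`\[K8s-Upgrade\]`), since `[...]` would otherwise be a character class matching any single listed character.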
/home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.5.0/internal/suite.go:807
------------------------------
When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
------------------------------
• [FAILED] [911.934 seconds]
When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:29
  [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118

  Begin Captured StdOut/StdErr Output >>
  ... skipping 8 lines ...
  kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0-config created
  kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0-config-cgroupfs created
  cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g created
  machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-mp-0 created
  dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-41hs3g-dmp-0 created
  Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-7r8vv-rkcqf, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
  Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj-84d4b47cdd-7r2pk, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
  Failed to get logs for Machine k8s-upgrade-and-conformance-41hs3g-md-0-4lvwj-84d4b47cdd-v2mjh, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
  Failed to get logs for MachinePool k8s-upgrade-and-conformance-41hs3g-mp-0, Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
  INFO: Creating namespace k8s-upgrade-and-conformance-af003p
  INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-af003p"
  INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-41hs3g" using the "upgrades-cgroupfs" template (Kubernetes v1.19.16, 1 control-plane machines, 2 worker machines)
  ... skipping 20 lines ...
  INFO: Waiting for the Cluster k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g to be deleted
  INFO: Deleting namespace k8s-upgrade-and-conformance-af003p
  << End Captured GinkgoWriter Output

  Timed out after 400.085s.
  Timed out waiting for all MachinePool k8s-upgrade-and-conformance-af003p/k8s-upgrade-and-conformance-41hs3g-mp-0 instances to be upgraded to Kubernetes version v1.20.15
  Error: function returned error: old version instances remain
      <*fmt.wrapError | 0xc001a4a2a0>: {
          msg: "function returned error: old version instances remain",
          err: <*errors.fundamental | 0xc000c40780>{
              msg: "old version instances remain",
              stack: [0x180919a, 0x4dacc5, 0x4da1bc, 0x883a3a, 0x884942, 0x8822ad, 0x1808fa4, 0x18076c5, 0x1ca190a, 0x861c5b, 0x874ad8, 0x4704c1],
          },
      }
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268
  ... skipping 25 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 2 Failures:
  [FAIL] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinepool_helpers.go:268
  [INTERRUPTED] [SynchronizedAfterSuite]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 26 Specs in 1029.011 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 25 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  --ginkgo.slow-spec-threshold is deprecated
    --slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.5.0

--- FAIL: TestE2E (1029.02s)
FAIL
... skipping 10 lines ...
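The run writes its results to `junit.e2e_suite.1.xml` under `/logs/artifacts` (see the `--junit-report` flag in the ginkgo invocation). Failed specs can be pulled out of such a report with the standard library alone; the XML below is a made-up fragment in the usual JUnit shape, not the actual artifact from this run:

```python
import xml.etree.ElementTree as ET

# Made-up JUnit fragment in the shape Ginkgo's --junit-report produces; with a
# real report you would use ET.parse("/logs/artifacts/junit.e2e_suite.1.xml").
sample = """<testsuite name="capi-e2e" tests="2" failures="1">
  <testcase name="spec that passed" time="10.0"/>
  <testcase name="spec that failed" time="911.9">
    <failure message="Timed out waiting for all MachinePool instances"/>
  </testcase>
</testsuite>"""

root = ET.fromstring(sample)
# A testcase is failed if it carries a <failure> child element.
failed = [tc.get("name") for tc in root.iter("testcase")
          if tc.find("failure") is not None]
print(failed)  # -> ['spec that failed']
```

This is how dashboards like testgrid classify the run; the free-form stack trace above is only in the captured output, while the junit file carries the machine-readable pass/fail verdicts.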
PASS

Ginkgo ran 1 suite in 18m20.955770835s
Test Suite Failed
make: *** [Makefile:776: test-e2e] Error 1
WARNING: No swap limit support
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
... skipping 5 lines ...