PR | cpanato: Upgrade ginkgo
Result | FAILURE
Tests | 2 failed / 6 succeeded
Started |
Elapsed | 37m11s
Revision | 56afee9feae7197c88aab4081a3f0fd9fa386791
Refs | 438
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sKCP\supgrade\sin\sa\sHA\scluster\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.001s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 14:18:34.705
from junit.e2e_suite.1.xml
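The assertion that timed out is the Gomega Eventually poll quoted in the stack traces below (controlplane_helpers.go:154): it repeatedly counts control-plane Machines and only passes once the count is greater than zero. The following is a minimal, self-contained sketch of that polling idiom, not the framework's actual code; countControlPlaneMachines is a hypothetical stand-in for the framework's Machine lookup, and the short intervals are illustrative.

    package framework_sketch_test

    import (
        "testing"
        "time"

        . "github.com/onsi/gomega"
    )

    // countControlPlaneMachines is a hypothetical stand-in for listing Machines
    // owned by the KubeadmControlPlane; it always reports zero here, so the
    // assertion times out and reproduces the shape of the failure above.
    func countControlPlaneMachines() (int, error) { return 0, nil }

    func TestWaitForOneControlPlaneMachine(t *testing.T) {
        g := NewWithT(t)

        // Eventually re-runs the polled function until it reports true or the
        // timeout (first interval argument) expires; on timeout it fails with
        // the supplied message plus "Expected <bool>: false to be true".
        g.Eventually(func() (bool, error) {
            count, err := countControlPlaneMachines()
            if err != nil {
                return false, err
            }
            return count > 0, nil
        }, 3*time.Second, 500*time.Millisecond).Should(BeTrue(), "No Control Plane machines came into existence. ")
    }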
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj created
docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-control-plane created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-control-plane created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-md-0 created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-md-0 created
configmap/k8s-upgrade-and-conformance-50mrbj-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-crs-cni created
configmap/k8s-upgrade-and-conformance-50mrbj-crs-ccm created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-50mrbj-crs-ccm created
domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created
domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created

> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 13:46:41.528
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 13:46:41.529 (0s)
> Enter [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 13:46:41.529
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 13:46:41.529
INFO: Creating namespace k8s-upgrade-and-conformance-nol598
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-nol598"
< Exit [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 13:46:41.566 (38ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 13:46:41.566
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 13:46:41.566
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-50mrbj" using the "upgrades" template (Kubernetes v1.24.9, 3 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-50mrbj --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 3 --worker-machine-count 0 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 13:46:44.612
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-nol598/k8s-upgrade-and-conformance-50mrbj-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 13:48:34.704

Automatically polling progress:
  Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.039s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    In [It] (Node Runtime: 10m0.001s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
      At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m6.863s)
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
  goroutine 173 [select]
    github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
    github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
  > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
        | }
        | return count > 0, nil
        > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
        | }
        |
  > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
        |
        | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
        > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
        |   Lister: input.Lister,
        |   Cluster: input.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
        | if input.WaitForControlPlaneInitialized == nil {
        |   input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
        >     result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
        |       Lister: input.ClusterProxy.GetClient(),
        |       Cluster: result.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
        |
        | log.Logf("Waiting for control plane to be initialized")
        > input.WaitForControlPlaneInitialized(ctx, input, result)
        |
        | if input.CNIManifestPath != "" {
  > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
        | By("Creating a workload cluster")
        |
        > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
        |   ClusterProxy: input.BootstrapClusterProxy,
        |   ConfigCluster: clusterctl.ConfigClusterInput{
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680})
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

  Goroutines of Interest
  goroutine 172 [chan receive, 10 minutes]
  > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
        |   defer close(stopInformer)
        |   informerFactory.Start(stopInformer)
        >   <-ctx.Done()
        |   stopInformer <- struct{}{}
        | }
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191
        | go func() {
        |   defer GinkgoRecover()
        >   WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
        |     ClientSet: input.ClientSet,
        |     Name: namespace.Name,
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189
        | log.Logf("Creating event watcher for namespace %q", input.Name)
        | watchesCtx, cancelWatches := context.WithCancel(ctx)
        > go func() {
        |   defer GinkgoRecover()
        |   WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
| } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 11 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 12m0.043s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 12m0.006s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 10m6.868s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 12 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 13m0.046s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 13m0.009s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 11m6.871s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 13 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 14m0.049s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 14m0.011s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 12m6.874s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 14 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 15m0.052s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 15m0.015s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 13m6.877s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 15 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 16m0.055s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 16m0.018s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 14m6.88s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 16 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 17m0.059s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 17m0.021s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 15m6.883s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 17 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 18m0.061s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 18m0.023s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 16m6.885s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 18 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 19m0.063s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 19m0.026s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 17m6.888s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 173 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000248850, {0x260af10?, 0x389d700}, 0x1, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000248850, {0x260af10, 0x389d700}, {0xc0001e5d20, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?, 0xc000a09400?}, {0xc0007002a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000132008}, {{0x7f05f8187840?, 0xc000248460?}, 0xc0009ce340?}, {0xc0007002a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0008d0840}, {{0xc000b18810, 0x22}, {0xc0001ca21f, 0x31}, {0xc0001ca251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc000543680}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 172 [chan receive, 19 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000ab7940}, {0xc000a54d80, {0xc000b18750, 0x22}, {0xc000a41590, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
[The same "Automatically polling progress" report, with identical Spec Goroutine and Goroutines of Interest stacks, repeats once per minute from 20m through 31m of spec runtime while the test keeps waiting for one control plane node to exist; only the elapsed times and the age of goroutine 172 change.]
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ [FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 14:18:34.705 < Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 14:18:34.705 (31m53.139s) > Enter [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 14:18:34.705 STEP: Dumping logs from the "k8s-upgrade-and-conformance-50mrbj" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:34.705 STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-nol598" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:34.705 STEP: Deleting cluster k8s-upgrade-and-conformance-nol598/k8s-upgrade-and-conformance-50mrbj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:34.991 STEP: Deleting cluster k8s-upgrade-and-conformance-50mrbj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 14:18:35.013 INFO: Waiting for the Cluster k8s-upgrade-and-conformance-nol598/k8s-upgrade-and-conformance-50mrbj to be deleted STEP: Waiting for cluster k8s-upgrade-and-conformance-50mrbj to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 14:18:35.03 STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:45.04 INFO: Deleting namespace k8s-upgrade-and-conformance-nol598 < Exit [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 14:18:45.064 (10.359s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 14:18:45.064 STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 14:18:45.064 < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 14:18:46.008 (945ms)
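The step that times out in this failure is the framework's WaitForOneKubeadmControlPlaneMachineToExist: per the code fragments visible in the trace (controlplane_helpers.go:154), it is a Gomega Eventually that keeps counting control-plane Machines and asserts the count goes above zero before the interval budget (1800s here) runs out. A minimal, self-contained sketch of that polling pattern follows; the names (countControlPlaneMachines, the short timeout) are illustrative assumptions, not the framework's actual API.

package main

import (
	"context"
	"fmt"
	"time"
)

// countControlPlaneMachines stands in for a client-go List of Machines filtered
// by the control-plane label; it always returns 0 here to mimic the failed run.
func countControlPlaneMachines(ctx context.Context) (int, error) {
	return 0, nil
}

// waitForOneControlPlaneMachine polls until at least one control-plane Machine
// exists or the timeout elapses, mirroring the Eventually(...).Should(BeTrue())
// pattern shown at controlplane_helpers.go:154 in the trace above.
func waitForOneControlPlaneMachine(ctx context.Context, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if count, err := countControlPlaneMachines(ctx); err == nil && count > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
	return fmt.Errorf("timed out after %s: no Control Plane machines came into existence", timeout)
}

func main() {
	// Short intervals for the sketch; the e2e run above used a 1800s budget,
	// which is why the failure message only fires after ~30 minutes of polling.
	err := waitForOneControlPlaneMachine(context.Background(), 3*time.Second, 500*time.Millisecond)
	fmt.Println(err)
}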
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sworkload\scluster\supgrade\sspec\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 14:18:34.705from junit.e2e_suite.1.xml
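The progress reports in the log below (and in the first failure above) repeatedly dump goroutines for the framework's namespace event watcher and pod-log streamers; those are parked on <-ctx.Done() and ReadFrom(podLogs) by design while the spec waits, so the only real stall is the control-plane wait itself. A minimal sketch of that watcher shape follows, with illustrative names rather than the framework's API, so the dumps below are easier to read.

package main

import (
	"context"
	"fmt"
	"time"
)

// watchNamespaceEvents mimics the shape of the event watcher seen in the
// "Goroutines of Interest" dumps: start informers for a namespace, then block
// on ctx.Done() until the spec is torn down. The informer start/stop here is
// only logged so the sketch stays self-contained.
func watchNamespaceEvents(ctx context.Context, namespace string) {
	stopInformer := make(chan struct{})
	defer close(stopInformer)

	fmt.Printf("starting event informers for namespace %q\n", namespace)

	<-ctx.Done() // parked here for the whole spec, exactly as in the goroutine dumps
	fmt.Printf("context cancelled, stopping informers for %q\n", namespace)
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go watchNamespaceEvents(ctx, "k8s-upgrade-and-conformance-example") // example namespace name

	time.Sleep(100 * time.Millisecond) // stand-in for the spec body running
	cancel()                           // AfterEach cancels the watch context
	time.Sleep(100 * time.Millisecond) // let the watcher goroutine log and exit
}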
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73 created docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-control-plane created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-control-plane created machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-md-0 created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-md-0 created configmap/k8s-upgrade-and-conformance-onji73-crs-cni created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-crs-cni created configmap/k8s-upgrade-and-conformance-onji73-crs-ccm created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-onji73-crs-ccm created domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 13:46:41.507 < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 13:46:41.507 (0s) > Enter [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 13:46:41.507 STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 13:46:41.507 INFO: Creating namespace k8s-upgrade-and-conformance-plj78s INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-plj78s" < Exit [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 13:46:41.528 (22ms) > Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 13:46:41.528 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 13:46:41.528 INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-onji73" using the "upgrades" template (Kubernetes v1.24.9, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster k8s-upgrade-and-conformance-onji73 --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 13:46:44.611 INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-plj78s/k8s-upgrade-and-conformance-onji73-control-plane to be provisioned STEP: Waiting for one control plane node to exist - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 13:48:34.703 Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.022s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 10m0.001s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m6.825s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 5 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xa5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 10 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x130) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 5 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xd3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 11m0.028s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 11m0.007s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 9m6.831s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 6 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xa5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 11 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x139) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 6 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xd3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 12m0.034s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 12m0.012s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 10m6.837s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xa5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 12 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x143) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xd3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 13m0.039s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 13m0.017s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 11m6.842s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 17m0.057s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 17m0.035s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 15m6.86s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xa5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 17 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 16 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x16b) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xd3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 17 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for {
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 18m and 19m): the report repeats unchanged apart from the elapsed times and wait counters. The Spec Goroutine (25149) remains blocked in the Gomega Eventually poll inside WaitForOneKubeadmControlPlaneMachineToExist (controlplane_helpers.go:154), still at the step "Waiting for one control plane node to exist", and the same Goroutines of Interest stay parked: the WatchDeploymentLogs streamers in bufio.(*Writer).ReadFrom, the WatchNamespaceEvents watcher on <-ctx.Done(), and the WatchPodMetrics select loops.
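The Spec Goroutine frames above show what the test spends the whole step doing: a Gomega Eventually poll that lists control-plane Machines for the new cluster and passes once at least one exists, otherwise failing with "No Control Plane machines came into existence." when the interval runs out. The following is a minimal, self-contained sketch of that polling shape, not the framework's code; listControlPlaneMachines is a hypothetical stand-in for the Lister-based Machine query.

package main

import (
	"context"
	"fmt"
	"time"
)

// listControlPlaneMachines is a hypothetical stand-in for the framework's
// Lister-based query for Machines owned by the KubeadmControlPlane.
func listControlPlaneMachines(ctx context.Context) (int, error) {
	return 0, nil // simulates "no Machines yet", the state this run never left
}

// waitForOneControlPlaneMachine polls until at least one control-plane
// Machine exists, the context is cancelled, or the timeout expires.
func waitForOneControlPlaneMachine(ctx context.Context, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if count, err := listControlPlaneMachines(ctx); err == nil && count > 0 {
			return nil
		}
		if time.Now().After(deadline) {
			// Mirrors the assertion message visible in the stack trace above.
			return fmt.Errorf("timed out after %s: no Control Plane machines came into existence", timeout)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	fmt.Println(waitForOneControlPlaneMachine(context.Background(), 3*time.Second, 500*time.Millisecond))
}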
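The Goroutines of Interest parked in bufio.(*Writer).ReadFrom are the per-container log streamers started by WatchDeploymentLogs; the inline snippet in the frames shows the pattern. Each goroutine copies a pod's log stream into a file and is expected to sit blocked until the stream closes, so these entries are background noise rather than the failure. A runnable sketch of the same shape, with a strings.Reader standing in for the clientset's log stream:

package main

import (
	"bufio"
	"io"
	"log"
	"os"
	"strings"
)

// streamLogsToFile copies a pod's log stream into a file through a buffered
// writer. ReadFrom blocks until the reader reports io.EOF (stream closed) or
// an error, which is why the goroutines above stay parked here for the run.
func streamLogsToFile(podLogs io.Reader, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	out := bufio.NewWriter(f)
	defer out.Flush()

	if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
		// Failing to stream logs should not fail the caller; just report it.
		log.Printf("streaming logs: %v", err)
	}
	return nil
}

func main() {
	// A strings.Reader stands in for a real pod log stream.
	if err := streamLogsToFile(strings.NewReader("fake pod log line\n"), "pod.log"); err != nil {
		log.Fatal(err)
	}
}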
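The remaining watchers follow the usual context-driven shape: WatchNamespaceEvents blocks on <-ctx.Done() until the namespace watcher is torn down, and WatchPodMetrics loops in a select between ctx.Done() and its poll interval. A small sketch of that loop, with pollOnce as a hypothetical stand-in for one metrics scrape:

package main

import (
	"context"
	"fmt"
	"time"
)

// pollOnce is a hypothetical stand-in for one pod-metrics scrape.
func pollOnce() { fmt.Println("scrape pod metrics") }

// watchPodMetrics loops until the test's context is cancelled; the goroutines
// above are sitting in the select below, waiting on either branch.
func watchPodMetrics(ctx context.Context, interval time.Duration) {
	for {
		select {
		case <-ctx.Done():
			return // the branch taken when the spec or its cleanup cancels the context
		case <-time.After(interval):
			pollOnce()
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	watchPodMetrics(ctx, 100*time.Millisecond)
}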
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 20m0.071s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 20m0.049s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 18m6.874s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001b0c948, 0xaa) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 20 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 19 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x190) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc000b2de48, 0xd8) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 20 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 21m0.075s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 21m0.053s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 19m6.878s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001b0c948, 0xac) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 21 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 20 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x1a1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc000b2de48, 0xe2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 21 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 22m0.079s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 22m0.058s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 20m6.883s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001b0c948, 0xb9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 22 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 21 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x1b3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc000b2de48, 0xe7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 22 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 23m0.084s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 23m0.062s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 21m6.887s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xbb) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
(Tail of the previous progress report's goroutine dump, condensed. The remaining goroutines of interest are the framework's background watchers, all idle while the spec waits:)

  goroutines 24992, 24999 (2m), 25000 (23m), 25033 (22m), and the log streamer whose frames open this excerpt, all in sync.Cond.Wait – WatchDeploymentLogs.func2 streamers blocked in bufio.(*Writer).ReadFrom(podLogs) on golang.org/x/net http2 pipe reads, waiting for more controller log output
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
  goroutine 25148, chan receive for 23 minutes – WatchNamespaceEvents, started by CreateNamespaceAndWatchEvents, parked on <-ctx.Done()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
  goroutine 25017, select – WatchPodMetrics polling loop selecting on ctx.Done()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
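For context, the log-streaming goroutines above are blocked like this by design: WatchDeploymentLogs opens a follow-mode log stream per container and copies it into a file, so each goroutine parks inside the HTTP/2 response-body read until new log bytes arrive or the stream closes. A minimal sketch of that pattern with client-go follows; the function name, file handling and error reporting here are illustrative, not the framework's exact code.

package logstream

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs and writes them to a local
// file. The out.ReadFrom(podLogs) call blocks until the stream ends or the
// context is cancelled, which is why these goroutines sit in sync.Cond.Wait /
// http2 pipe reads for the lifetime of the spec.
func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, container, path string) error {
	req := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep the stream open; new lines are delivered as they are produced
	})
	podLogs, err := req.Stream(ctx)
	if err != nil {
		return fmt.Errorf("opening log stream for %s/%s: %w", pod.Name, container, err)
	}
	defer podLogs.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	out := bufio.NewWriter(f)
	defer out.Flush()

	// Drain the stream into the file until EOF or cancellation.
	if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
		// Failing to stream logs should not fail the test; just report it.
		fmt.Fprintf(os.Stderr, "log streaming for %s/%s ended with: %v\n", pod.Name, container, err)
	}
	return nil
}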
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
  Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 24m0.087s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 24m0.066s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  At [By Step] Waiting for one control plane node to exist (Step Runtime: 22m6.891s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
  goroutine 25149 [select]
    github.com/onsi/gomega/internal.(*AsyncAssertion).match / Should
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426, 110
  > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
      |   return count > 0, nil
      | }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
  > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1 (default WaitForControlPlaneInitialized)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
  > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2 ("Creating a workload cluster")
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3 / (*Suite).runNode
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445, suite.go:847, 834

  Goroutines of Interest (condensed; full frames are as in the dump above)
  goroutines 25004, 25017, 25031, 25038 [select] – WatchPodMetrics polling loops selecting on ctx.Done()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225-228
  goroutines 24992, 24999 (3m), 25000 (24m), 25033 (23m), 25042 (3m) [sync.Cond.Wait] – WatchDeploymentLogs streamers blocked in bufio.(*Writer).ReadFrom(podLogs)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
  goroutine 25148 [chan receive, 24 minutes] – WatchNamespaceEvents parked on <-ctx.Done()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
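The Spec Goroutine is the actual failure path: the framework repeatedly lists Machines labelled as control-plane members of the new workload cluster and the Gomega Eventually only succeeds once at least one exists. A rough sketch of that wait follows, assuming a controller-runtime client and the well-known cluster.x-k8s.io labels; the 30-minute/10-second intervals are placeholders for the e2e config's wait-control-plane intervals, and the function name is illustrative rather than the framework's own.

package waiters

import (
	"context"
	"time"

	"github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine blocks until at least one control plane
// Machine exists for the given workload cluster, mirroring the Eventually
// visible at controlplane_helpers.go:154. If nothing appears before the
// timeout, it fails with "No Control Plane machines came into existence."
func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string) {
	gomega.Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err // keep polling on transient list errors
		}
		return len(machines.Items) > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(gomega.BeTrue(),
		"No Control Plane machines came into existence.")
}

The framework's version takes the intervals from the e2e configuration (the "intervals..." argument visible in the stack) rather than hard-coding them.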
(The automatic progress poll then repeats once a minute, at 25m0.092s, 26m0.096s and 27m0.101s of spec runtime, i.e. 23m, 24m and 25m into the "Waiting for one control plane node to exist" step. Each report is identical to the 24-minute one above: spec goroutine 25149 is still blocked in WaitForOneKubeadmControlPlaneMachineToExist, and the same watcher goroutines (25004, 25017, 25031, 25038, 24992, 24999, 25000, 25033, 25042, 25148) remain idle in their select loops, log-stream reads and event watch; only the per-goroutine wait timers advance.)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x1da) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 6 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xe7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 27 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for {
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 31m0.12s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 31m0.099s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 29m6.923s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 25149 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00069c5b0, {0x260af10?, 0x389d700}, 0x1, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00069c5b0, {0x260af10, 0x389d700}, {0xc0006b5d90, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?, 0xc000e87800?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7ffa359e6b80?, 0xc0004d1e30?}, 0xc001415040?}, {0xc0021cdcc0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc001e83cc0}, {{0xc002018db0, 0x22}, {0xc00130e6ff, 0x31}, {0xc00130e731, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc0014e6300}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 25004 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25042 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc001b0c948, 0xc2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001b0c930, {0xc001cf6000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001cf6000?, 0xc000ebbda0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000ebbda0}, {0x7ffa2cce7200, 0xc001b0c900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e6d0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0010259f0, {0x7ffa2cce7200, 0xc001b0c900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001427d40, 0x3e}, {0xc001427dc0, 0x39}, {0xc0015ea510, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25148 [chan receive, 31 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc0021dac80}, {0xc000dc1380, {0xc002018cf0, 0x22}, {0xc002018ab0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 25033 [sync.Cond.Wait, 30 minutes] sync.runtime_notifyListWait(0xc0023b0dc8, 0x18) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0023b0db0, {0xc001c16000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001c16000?, 0xc001c5b640?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c5b640}, {0x7ffa2cce7200, 0xc0023b0d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000970258, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0005f39f0, {0x7ffa2cce7200, 0xc0023b0d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000b83dc0, 0x3a}, {0xc000b83e00, 0x35}, {0xc000036240, 0x1d}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25017 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 24992 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0014e6048, 0x216) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014e6030, {0xc001dce000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001dce000?, 0xc00118beb0?, 0xc000100000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00118beb0}, {0x7ffa2cce7200, 0xc0014e6000}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00212e2f8, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000dad9f0, {0x7ffa2cce7200, 0xc0014e6000}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc001fa9c80, 0x28}, {0xc001fa9ce0, 0x23}, {0xc00115ba70, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 24999 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc000b2de48, 0xf1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc000b2de30, {0xc001ae0000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ae0000?, 0xc001085dc0?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085dc0}, {0x7ffa2cce7200, 0xc000b2de00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00035e608, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0022c39f0, {0x7ffa2cce7200, 0xc000b2de00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25000 [sync.Cond.Wait, 31 minutes] sync.runtime_notifyListWait(0xc0018227c8, 0x1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0018227b0, {0xc001ad4000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001ad4000?, 0xc001085d80?, 0xc000096c00?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001085d80}, {0x7ffa2cce7200, 0xc001822780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000682bb0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a5b9f0, {0x7ffa2cce7200, 0xc001822780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00151c180, 0x29}, {0xc00151c1b0, 0x24}, {0xc0015d4f60, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 25031 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 25038 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { [FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 14:18:34.705 < Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 14:18:34.705 (31m53.177s) > Enter [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 14:18:34.706 STEP: Dumping logs from the "k8s-upgrade-and-conformance-onji73" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:34.706 STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-plj78s" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:34.706 STEP: Deleting cluster k8s-upgrade-and-conformance-plj78s/k8s-upgrade-and-conformance-onji73 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:35.007 STEP: Deleting cluster k8s-upgrade-and-conformance-onji73 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 14:18:35.028 INFO: Waiting for the Cluster k8s-upgrade-and-conformance-plj78s/k8s-upgrade-and-conformance-onji73 to be deleted STEP: Waiting for cluster k8s-upgrade-and-conformance-onji73 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 14:18:35.041 STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 14:18:45.054 INFO: Deleting namespace k8s-upgrade-and-conformance-plj78s < Exit [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 14:18:45.072 (10.367s) > Enter [AfterEach] Running the Cluster API E2E tests - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 14:18:45.072 STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 14:18:45.072 < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 14:18:46.076 (1.004s)
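Note on the failure itself: the spec goroutine above is parked in framework.WaitForOneKubeadmControlPlaneMachineToExist, and the source lines the progress report quotes ("return count > 0, nil }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence.")") show this is a Gomega Eventually that never observed a control plane Machine within its 1800s budget. The sketch below only illustrates that polling pattern; the function name, label keys, and intervals here are illustrative assumptions, not the framework's exact implementation.

package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine polls the management cluster until at least
// one control plane Machine for the named cluster exists, or the timeout
// expires. On timeout Gomega fails the spec with the message seen in the log.
func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, clusterName, namespace string) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		// Assumption: control plane Machines are selected by the standard
		// CAPI labels; the real helper may narrow the list differently.
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			}); err != nil {
			return false, err
		}
		return len(machines.Items) > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(), "No Control Plane machines came into existence.")
}

In other words, the assertion is healthy; the provider simply never created a DOMachine/Machine for the control plane, which is what needs investigating in the controller logs.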
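The many other goroutines parked for 30+ minutes in sync.Cond.Wait or chan receive are not part of the failure: they are the framework's background watchers (WatchDeploymentLogs, WatchNamespaceEvents, WatchPodMetrics) blocked on long-lived streams until the test context is cancelled. A minimal sketch of the per-container log-streaming pattern those stacks show, assuming a client-go clientset and an output directory (both illustrative, not the framework's exact code):

package e2esketch

import (
	"bufio"
	"context"
	"io"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs and copies them to a file.
// It blocks in ReadFrom until the stream or the context ends, which is why
// these goroutines sit in http2.(*pipe).Read for the whole run.
func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, container corev1.Container, logDir string) error {
	podLogs, err := cs.CoreV1().Pods(pod.Namespace).
		GetLogs(pod.Name, &corev1.PodLogOptions{Container: container.Name, Follow: true}).
		Stream(ctx)
	if err != nil {
		return err
	}
	defer podLogs.Close()

	f, err := os.Create(filepath.Join(logDir, pod.Name+"-"+container.Name+".log"))
	if err != nil {
		return err
	}
	defer f.Close()

	out := bufio.NewWriter(f)
	defer out.Flush()
	// Mirroring the snippet quoted in the stack trace: failing to stream logs
	// should not fail the test, and unexpected EOFs are tolerated.
	if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
		return err
	}
	return nil
}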
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [It] Conformance Tests Should run conformance tests
capg-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capg-e2e [It] Workload cluster creation Creating a highly available control-plane cluster Should create a cluster with 3 control-plane and 2 worker nodes
capg-e2e [It] Workload cluster creation Creating a single control-plane cluster Should create a cluster with 1 worker node and can be scaled