PR | cpanato: Upgrade ginkgo
Result | FAILURE
Tests | 2 failed / 6 succeeded
Started |
Elapsed | 36m54s
Revision | 56afee9feae7197c88aab4081a3f0fd9fa386791
Refs | 438
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sKCP\supgrade\sin\sa\sHA\scluster\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.000s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 11:45:42.644
from junit.e2e_suite.1.xml
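The assertion that timed out is the Gomega Eventually poll visible in the goroutine stacks below: the test framework repeatedly counts control plane Machines for the new workload cluster and expects the count to become non-zero before the wait intervals expire. A minimal sketch of that polling pattern, assuming a controller-runtime client and the standard Cluster API labels (the helper name, signature, and label-based listing are illustrative, not the upstream framework code):

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine polls the management cluster until at least one
// control plane Machine belonging to the given workload cluster exists, mirroring
// the check in framework/controlplane_helpers.go:154 that failed above.
func waitForOneControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string, timeout, poll time.Duration) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err
		}
		// The wait succeeds as soon as one Machine shows up.
		return len(machines.Items) > 0, nil
	}, timeout, poll).Should(BeTrue(), "No Control Plane machines came into existence. ")
}

In this run the condition never became true within the 1800s window, so the spec failed before the upgrade and kubetest phases were reached.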
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm created
docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-control-plane created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-control-plane created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-md-0 created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-md-0 created
configmap/k8s-upgrade-and-conformance-zlovsm-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-crs-cni created
configmap/k8s-upgrade-and-conformance-zlovsm-crs-ccm created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-zlovsm-crs-ccm created
domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created
domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created

> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 11:13:59.828
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 11:13:59.828 (0s)
> Enter [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 11:13:59.828
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:13:59.828
INFO: Creating namespace k8s-upgrade-and-conformance-jfp0dt
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-jfp0dt"
< Exit [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 11:13:59.875 (47ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 11:13:59.875
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 11:13:59.875
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-zlovsm" using the "upgrades" template (Kubernetes v1.24.9, 3 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-zlovsm --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 3 --worker-machine-count 0 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 11:14:02.565
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-jfp0dt/k8s-upgrade-and-conformance-zlovsm-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 11:15:42.643

Automatically polling progress:
  Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.048s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    In [It] (Node Runtime: 10m0.001s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
      At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m17.233s)
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
  goroutine 199 [select]
    github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
    github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
  > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
        |     }
        |     return count > 0, nil
        > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
        | }
  > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
        | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
        > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
        |     Lister:  input.Lister,
        |     Cluster: input.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
        | if input.WaitForControlPlaneInitialized == nil {
        |     input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
        >         result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
        |             Lister:  input.ClusterProxy.GetClient(),
        |             Cluster: result.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
        | log.Logf("Waiting for control plane to be initialized")
        > input.WaitForControlPlaneInitialized(ctx, input, result)
        | if input.CNIManifestPath != "" {
  > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
        | By("Creating a workload cluster")
        > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
        |     ClusterProxy: input.BootstrapClusterProxy,
        |     ConfigCluster: clusterctl.ConfigClusterInput{
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0})
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

  Goroutines of Interest
  goroutine 198 [chan receive, 10 minutes]
  > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
        | defer close(stopInformer)
        | informerFactory.Start(stopInformer)
        > <-ctx.Done()
        | stopInformer <- struct{}{}
        | }
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191
        | go func() {
        |     defer GinkgoRecover()
        >     WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
        |         ClientSet: input.ClientSet,
        |         Name:      namespace.Name,
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189
        | log.Logf("Creating event watcher for namespace %q", input.Name)
        | watchesCtx, cancelWatches := context.WithCancel(ctx)
        > go func() {
        |     defer GinkgoRecover()
        |     WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
(The same progress report, with identical goroutine stacks, is emitted again at one-minute intervals as the spec runtime advances from 11m through 21m, until the 1800s wait gives up with the timeout shown above.)
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 22m0.08s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 22m0.033s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 20m17.266s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 22 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 23m0.083s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 23m0.036s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 21m17.268s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | 
return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 23 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | 
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 24m0.085s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 24m0.038s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 22m17.27s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 24 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 25m0.088s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 25m0.04s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 23m17.273s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | 
return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 25 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | 
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 26m0.09s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 26m0.043s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 24m17.275s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 26 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 27m0.093s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 27m0.046s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 25m17.278s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | 
return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 27 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | 
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 28m0.096s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 28m0.048s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 26m17.281s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 28 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 29m0.098s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 29m0.051s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 27m17.284s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | 
return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 29 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | 
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 30m0.1s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 30m0.053s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 28m17.286s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 30 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 31m0.103s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 31m0.056s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 29m17.289s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 199 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0004a0850, {0x260af10?, 0x389d700}, 0x1, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0004a0850, {0x260af10, 0x389d700}, {0xc000b4cb70, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?, 0xc000669400?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | 
return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f1b38527de8?, 0xc0004a00e0?}, 0xc000956340?}, {0xc0004f1c80, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000881200}, {{0xc000b27fb0, 0x22}, {0xc0006f621f, 0x31}, {0xc0006f6251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0x0}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 198 [chan receive, 31 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00064a880}, {0xc000a8d800, {0xc000b27ef0, 0x22}, {0xc000b27dd0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | 
log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ [FAILED] Timed out after 1800.000s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 11:45:42.644 < Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 11:45:42.644 (31m42.768s) > Enter [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 11:45:42.644 STEP: Dumping logs from the "k8s-upgrade-and-conformance-zlovsm" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:42.644 STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-jfp0dt" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:42.644 STEP: Deleting cluster k8s-upgrade-and-conformance-jfp0dt/k8s-upgrade-and-conformance-zlovsm - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:42.869 STEP: Deleting cluster k8s-upgrade-and-conformance-zlovsm - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 11:45:42.886 INFO: Waiting for the Cluster k8s-upgrade-and-conformance-jfp0dt/k8s-upgrade-and-conformance-zlovsm to be deleted STEP: Waiting for cluster k8s-upgrade-and-conformance-zlovsm to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 11:45:42.9 STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:52.908 INFO: Deleting namespace k8s-upgrade-and-conformance-jfp0dt < Exit [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 11:45:52.929 (10.285s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 11:45:52.929 STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 11:45:52.929 < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 11:45:53.761 (832ms)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sworkload\scluster\supgrade\sspec\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 11:45:52.666 from junit.e2e_suite.1.xml
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj created docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-control-plane created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-control-plane created machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-md-0 created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-md-0 created configmap/k8s-upgrade-and-conformance-h8hsmj-crs-cni created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-crs-cni created configmap/k8s-upgrade-and-conformance-h8hsmj-crs-ccm created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-h8hsmj-crs-ccm created domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 11:13:59.791 < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 11:13:59.792 (0s) > Enter [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 11:13:59.792 STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:13:59.792 INFO: Creating namespace k8s-upgrade-and-conformance-e8by05 INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-e8by05" < Exit [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 11:13:59.812 (21ms) > Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 11:13:59.812 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 11:13:59.813 INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-h8hsmj" using the "upgrades" template (Kubernetes v1.24.9, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster k8s-upgrade-and-conformance-h8hsmj --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 11:14:02.586 INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-e8by05/k8s-upgrade-and-conformance-h8hsmj-control-plane to be provisioned STEP: Waiting for one control plane node to exist - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 11:15:52.664 Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.022s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 10m0.001s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m7.149s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 5 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x141) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 10 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 11m0.027s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 11m0.006s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 9m7.154s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 6 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x14b) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 8 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 11 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 12m0.031s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 12m0.01s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 10m7.158s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x153) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 12 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 13m0.037s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 13m0.016s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  At [By Step] Waiting for one control plane node to exist (Step Runtime: 11m7.164s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

Spec Goroutine
goroutine 27078 [select]
github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1})
  /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1})
  /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
> sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2})
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
    | }
    | return count > 0, nil
    > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
    | }
    |
> sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2})
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
    |
    | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
    > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
    | Lister: input.Lister,
    | Cluster: input.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
    | if input.WaitForControlPlaneInitialized == nil {
    | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
    > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
    | Lister: input.ClusterProxy.GetClient(),
    | Cluster: result.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
    |
    | log.Logf("Waiting for control plane to be initialized")
    > input.WaitForControlPlaneInitialized(ctx, input, result)
    |
    | if input.CNIManifestPath != "" {
> sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
    | By("Creating a workload cluster")
    |
    > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
    | ClusterProxy: input.BootstrapClusterProxy,
    | ConfigCluster: clusterctl.ConfigClusterInput{
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500})
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

Goroutines of Interest

goroutine 26473 [select]
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
    | defer GinkgoRecover()
    | for {
    > select {
    | case <-ctx.Done():
    | return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
    | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
    |
    > go func() {
    | defer GinkgoRecover()
    | for {

goroutine 26931 [sync.Cond.Wait, 8 minutes]
sync.runtime_notifyListWait(0xc001384ac8, 0xa9)
  /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
  /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0})
  /usr/local/go/src/io/io.go:427
io.Copy(...)
  /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80})
  /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80})
  /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80})
  /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
    | out := bufio.NewWriter(f)
    | defer out.Flush()
    > _, err = out.ReadFrom(podLogs)
    | if err != nil && err != io.ErrUnexpectedEOF {
    | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
    |
    | // Watch each container's logs in a goroutine so we can stream them all concurrently.
    > go func(pod corev1.Pod, container corev1.Container) {
    | defer GinkgoRecover()
    |

goroutine 26907 [select]
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
    | defer GinkgoRecover()
    | for {
    > select {
    | case <-ctx.Done():
    | return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
    | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
    |
    > go func() {
    | defer GinkgoRecover()
    | for {

goroutine 26910 [select]
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
    | defer GinkgoRecover()
    | for {
    > select {
    | case <-ctx.Done():
    | return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
    | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
    |
    > go func() {
    | defer GinkgoRecover()
    | for {

goroutine 26934 [sync.Cond.Wait]
sync.runtime_notifyListWait(0xc001894348, 0x15c)
  /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
  /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0})
  /usr/local/go/src/io/io.go:427
io.Copy(...)
  /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300})
  /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300})
  /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300})
  /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
    | out := bufio.NewWriter(f)
    | defer out.Flush()
    > _, err = out.ReadFrom(podLogs)
    | if err != nil && err != io.ErrUnexpectedEOF {
    | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
    |
    | // Watch each container's logs in a goroutine so we can stream them all concurrently.
    > go func(pod corev1.Pod, container corev1.Container) {
    | defer GinkgoRecover()
    |

goroutine 26469 [sync.Cond.Wait, 10 minutes]
sync.runtime_notifyListWait(0xc0014027c8, 0xa2)
  /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
  /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0})
  /usr/local/go/src/io/io.go:427
io.Copy(...)
  /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780})
  /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780})
  /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780})
  /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
    | out := bufio.NewWriter(f)
    | defer out.Flush()
    > _, err = out.ReadFrom(podLogs)
    | if err != nil && err != io.ErrUnexpectedEOF {
    | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
    |
    | // Watch each container's logs in a goroutine so we can stream them all concurrently.
    > go func(pod corev1.Pod, container corev1.Container) {
    | defer GinkgoRecover()
    |

goroutine 26470 [sync.Cond.Wait, 13 minutes]
sync.runtime_notifyListWait(0xc001384948, 0x2)
  /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
  /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0})
  /usr/local/go/src/io/io.go:427
io.Copy(...)
  /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900})
  /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900})
  /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900})
  /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
    | out := bufio.NewWriter(f)
    | defer out.Flush()
    > _, err = out.ReadFrom(podLogs)
    | if err != nil && err != io.ErrUnexpectedEOF {
    | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
    |
    | // Watch each container's logs in a goroutine so we can stream them all concurrently.
    > go func(pod corev1.Pod, container corev1.Container) {
    | defer GinkgoRecover()
    |

goroutine 26480 [select]
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
    | defer GinkgoRecover()
    | for {
    > select {
    | case <-ctx.Done():
    | return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
    | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
    |
    > go func() {
    | defer GinkgoRecover()
    | for {

goroutine 27077 [chan receive, 13 minutes]
> sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}})
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
    | defer close(stopInformer)
    | informerFactory.Start(stopInformer)
    > <-ctx.Done()
    | stopInformer <- struct{}{}
    | }
> sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191
    | go func() {
    | defer GinkgoRecover()
    > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
    | ClientSet: input.ClientSet,
    | Name: namespace.Name,
> sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189
    | log.Logf("Creating event watcher for namespace %q", input.Name)
    | watchesCtx, cancelWatches := context.WithCancel(ctx)
    > go func() {
    | defer GinkgoRecover()
    | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{

goroutine 26477 [sync.Cond.Wait, 12 minutes]
sync.runtime_notifyListWait(0xc00154f248, 0x1e)
  /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
  /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?})
  /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0})
  /usr/local/go/src/io/io.go:427
io.Copy(...)
  /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200})
  /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200})
  /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200})
  /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...)
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
    | out := bufio.NewWriter(f)
    | defer out.Flush()
    > _, err = out.ReadFrom(podLogs)
    | if err != nil && err != io.ErrUnexpectedEOF {
    | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
    |
    | // Watch each container's logs in a goroutine so we can stream them all concurrently.
    > go func(pod corev1.Pod, container corev1.Container) {
    | defer GinkgoRecover()
    |
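The Spec Goroutine, by contrast, is the one that eventually times out: WaitForOneKubeadmControlPlaneMachineToExist polls with Gomega's Eventually, listing Machines and asserting count > 0 until the configured intervals are exhausted (the `return count > 0, nil` / `Should(BeTrue(), ...)` snippet in the stack above). A rough sketch of that wait using controller-runtime and Gomega; the label keys, intervals, and the NodeRef-based counting are assumptions for illustration, not the exact cluster-api test framework code:

```go
// Sketch only: mirrors the Eventually(...).Should(BeTrue(), ...) pattern visible in
// the Spec Goroutine above. Label keys and intervals are assumed for illustration.
package main

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine is meant to be called from inside a Ginkgo spec,
// where Gomega's fail handler is registered. It polls the management cluster until
// at least one control-plane Machine of the workload cluster reports a NodeRef.
func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName, // assumed CAPI cluster-name label
				"cluster.x-k8s.io/control-plane": "",          // assumed CAPI control-plane label
			})
		if err != nil {
			return false, err
		}
		count := 0
		for _, m := range machines.Items {
			if m.Status.NodeRef != nil { // a machine actually came into existence and joined
				count++
			}
		}
		return count > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(), "No Control Plane machines came into existence.")
}
```

When no control-plane Machine ever appears, as in this run, the polled function never returns true and the assertion fails with the "No Control Plane machines came into existence." message once the wait interval runs out.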
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 14m0.042s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 14m0.021s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 12m7.169s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x166) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 14 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 14 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 13 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 15m0.047s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 15m0.026s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 13m7.174s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x171) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 15 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 15 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 14 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 16m0.051s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 16m0.03s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 14m7.178s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
  |
  | log.Logf("Waiting for control plane to be initialized")
  > input.WaitForControlPlaneInitialized(ctx, input, result)
  |
  | if input.CNIManifestPath != "" {
> sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
  | By("Creating a workload cluster")
  |
  > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
  | ClusterProxy: input.BootstrapClusterProxy,
  | ConfigCluster: clusterctl.ConfigClusterInput{
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500})
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

Goroutines of Interest
  [same WatchPodMetrics, WatchDeploymentLogs, and WatchNamespaceEvents goroutine stacks as in the 17m progress report below]
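For reference, the Spec Goroutine in the progress report below is parked inside framework.WaitForOneKubeadmControlPlaneMachineToExist, a Gomega Eventually loop that keeps listing Machines for the new control plane until at least one appears. A minimal sketch of that polling pattern, assuming literal label keys and a controller-runtime client (an illustration, not the framework's exact source):

package e2e

import (
	"context"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine polls the management cluster until at least
// one control plane Machine for the given workload cluster exists, or the
// Eventually deadline passes with the same message seen in this failure.
func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string, intervals ...interface{}) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		// Assumed literal label keys; the framework selects Machines by
		// cluster name plus the control-plane label.
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err
		}
		return len(machines.Items) > 0, nil
	}, intervals...).Should(BeTrue(), "No Control Plane machines came into existence.")
}

In this run the condition never flips to true, so the Eventually deadline is what ultimately fails the spec with "No Control Plane machines came into existence."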
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 17m0.055s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 17m0.034s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  At [By Step] Waiting for one control plane node to exist (Step Runtime: 15m7.182s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

Spec Goroutine
goroutine 27078 [select]
  github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1})
    /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
  github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1})
    /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
> sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2})
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
  | }
  | return count > 0, nil
  > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
  | }
  |
> sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2})
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
  |
  | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
  > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
  | Lister: input.Lister,
  | Cluster: input.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
  | if input.WaitForControlPlaneInitialized == nil {
  | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
  > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
  | Lister: input.ClusterProxy.GetClient(),
  | Cluster: result.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xa9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x185) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 14 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xa2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 17 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 17 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 16 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) {
  | defer GinkgoRecover()
  |

[Identical progress reports follow at Spec Runtime 18m0.059s and 19m0.065s: the Spec Goroutine is still blocked in framework.WaitForOneKubeadmControlPlaneMachineToExist (controlplane_helpers.go:154) at the "Waiting for one control plane node to exist" step, and the same WatchPodMetrics, WatchDeploymentLogs, and WatchNamespaceEvents goroutines remain blocked, now for up to 19 minutes.]
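The recurring "Goroutines of Interest" are not themselves the failure: they come from framework.WatchDeploymentLogs (one goroutine per controller container, streaming logs to disk) plus framework.WatchPodMetrics and WatchNamespaceEvents, all of which are expected to sit blocked until their context is cancelled. A rough sketch of the log-tailing pattern behind those sync.Cond.Wait stacks, using client-go (the function name and file layout are assumptions, not the framework's exact code):

package framework

import (
	"bufio"
	"context"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// tailContainerLog follows one container's log stream and copies it to a file.
// While no new log lines arrive, the read blocks inside the HTTP/2 response
// body, which shows up as the sync.Cond.Wait frames in the dumps above.
func tailContainerLog(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, container corev1.Container, logDir string) error {
	req := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{
		Container: container.Name,
		Follow:    true, // keep the stream open for the lifetime of the test
	})
	podLogs, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer podLogs.Close()

	f, err := os.Create(filepath.Join(logDir, pod.Name+"-"+container.Name+".log"))
	if err != nil {
		return err
	}
	defer f.Close()

	out := bufio.NewWriter(f)
	defer out.Flush()
	_, err = out.ReadFrom(podLogs) // returns only when the stream closes or errors
	return err
}

Because out.ReadFrom(podLogs) only returns when the log stream closes, these goroutines accumulate 11-19 minutes of wait time while the spec itself is stuck; the actionable stack is the Spec Goroutine, repeated below at the 20-minute mark.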
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 23m0.08s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 23m0.059s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 21m7.207s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc001384ac8, 0xc1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x1dc) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xba) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 23 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 23 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 22 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 24m0.084s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 24m0.063s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 22m7.211s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834
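The Spec Goroutine above is parked inside a Gomega Eventually that repeatedly lists control-plane Machines for the workload cluster and only passes once at least one exists; after the configured timeout it fails with "No Control Plane machines came into existence." Below is a minimal, self-contained sketch of that wait pattern. The function name, the literal label keys, and the hard-coded intervals are illustrative assumptions, not the framework's exact helper.

```go
package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForFirstControlPlaneMachine blocks until at least one control-plane
// Machine for the given cluster exists. This mirrors the polling loop the
// stack trace is stuck in; it is a sketch, not the upstream helper.
func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, clusterName, namespace string) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		// Select only Machines that belong to this cluster's control plane.
		err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		)
		if err != nil {
			return false, err
		}
		return len(machines.Items) > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(),
		"No Control Plane machines came into existence. ")
}
```

In the failed run the list presumably kept returning zero Machines, so this assertion polled until the interval elapsed and then aborted the spec.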
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 25m0.09s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 25m0.068s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 23m7.217s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133
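The long-lived sync.Cond.Wait goroutines in the dump above are the framework's per-container log streamers: each one follows a controller pod's log stream over HTTP/2 and copies it into a file, so it stays blocked on the stream read for as long as the pod keeps running. A rough, hedged sketch of that pattern using client-go follows; the function name and file layout are assumptions for illustration.

```go
package e2esketch

import (
	"bufio"
	"context"
	"os"
	"path/filepath"

	. "github.com/onsi/ginkgo/v2"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs and writes them to disk.
// The goroutine blocks on the HTTP/2 response body until the stream ends or
// ctx is cancelled, which is why these goroutines show up in sync.Cond.Wait.
func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, container corev1.Container, logDir string) {
	go func(pod corev1.Pod, container corev1.Container) {
		defer GinkgoRecover()

		opts := &corev1.PodLogOptions{Container: container.Name, Follow: true}
		podLogs, err := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, opts).Stream(ctx)
		if err != nil {
			return // failing to stream logs should not fail the test
		}
		defer podLogs.Close()

		f, err := os.Create(filepath.Join(logDir, pod.Name+"-"+container.Name+".log"))
		if err != nil {
			return
		}
		defer f.Close()

		out := bufio.NewWriter(f)
		defer out.Flush()
		_, _ = out.ReadFrom(podLogs) // blocks here while the stream stays open
	}(pod, container)
}
```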
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 26m0.093s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 26m0.072s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 24m7.22s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133
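The [select] goroutines (WatchPodMetrics) show the other recurring shape in the dump: a polling loop that wakes on a timer and exits as soon as the test context is cancelled. A small sketch of that shape is below; the interval value and the callback are assumptions, with the actual metrics scraping left as a placeholder.

```go
package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/ginkgo/v2"
)

// pollUntilCancelled runs collect on a fixed interval until ctx is cancelled,
// mirroring the select loop the WatchPodMetrics goroutines are parked in.
func pollUntilCancelled(ctx context.Context, interval time.Duration, collect func(context.Context)) {
	go func() {
		defer GinkgoRecover()
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				collect(ctx) // e.g. scrape /metrics for the watched deployment's pods
			}
		}
	}()
}
```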
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 27m0.097s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 27m0.076s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  At [By Step] Waiting for one control plane node to exist (Step Runtime: 25m7.225s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
  goroutine 27078 [select]
    github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
    github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
  > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
        |   }
        |   return count > 0, nil
        > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
        | }
        |
  > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
        |
        | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
        > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
        |   Lister:  input.Lister,
        |   Cluster: input.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
        | if input.WaitForControlPlaneInitialized == nil {
        |   input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
        >     result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
        |       Lister:  input.ClusterProxy.GetClient(),
        |       Cluster: result.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
        |
        | log.Logf("Waiting for control plane to be initialized")
        > input.WaitForControlPlaneInitialized(ctx, input, result)
        |
        | if input.CNIManifestPath != "" {
  > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
        | By("Creating a workload cluster")
        |
        > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
        |   ClusterProxy: input.BootstrapClusterProxy,
        |   ConfigCluster: clusterctl.ConfigClusterInput{
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500})
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

  Goroutines of Interest
  goroutine 26473 [select]
  > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
        | defer GinkgoRecover()
        | for {
        >   select {
        |   case <-ctx.Done():
        |     return
  > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
        | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
        |
        > go func() {
        |   defer GinkgoRecover()
        |   for {

  goroutine 26931 [sync.Cond.Wait, 6 minutes]
    sync.runtime_notifyListWait(0xc001384ac8, 0xc1)
      /usr/local/go/src/runtime/sema.go:517
    sync.(*Cond).Wait(0x0?)
      /usr/local/go/src/sync/cond.go:70
    golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000})
      /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
    golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?})
      /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
    io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0})
      /usr/local/go/src/io/io.go:427
    io.Copy(...)
      /usr/local/go/src/io/io.go:386
    os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80})
      /usr/local/go/src/os/file.go:162
    os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80})
      /usr/local/go/src/os/file.go:156
    bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80})
      /usr/local/go/src/bufio/bufio.go:784
  > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
        | out := bufio.NewWriter(f)
        | defer out.Flush()
        > _, err = out.ReadFrom(podLogs)
        | if err != nil && err != io.ErrUnexpectedEOF {
        |   // Failing to stream logs should not cause the test to fail
  > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
        |
        | // Watch each container's logs in a goroutine so we can stream them all concurrently.
        > go func(pod corev1.Pod, container corev1.Container) {
        |   defer GinkgoRecover()

  The remaining goroutines of interest in this poll repeat the same two stack shapes:
    goroutine 26907 [select], goroutine 26910 [select] and goroutine 26480 [select] are parked in the same
      WatchPodMetrics select loop as goroutine 26473 (deployment_helpers.go:228, waiting on <-ctx.Done()).
    goroutine 26934 [sync.Cond.Wait], goroutine 26469 [sync.Cond.Wait, 6 minutes], goroutine 26470
      [sync.Cond.Wait, 27 minutes] and goroutine 26477 [sync.Cond.Wait, 26 minutes] are parked in the same
      WatchDeploymentLogs stack as goroutine 26931 (http2.(*pipe).Read -> bufio.(*Writer).ReadFrom at
      deployment_helpers.go:186), each streaming logs for a different pod/container.
    goroutine 27077 [chan receive, 27 minutes] is WatchNamespaceEvents (namespace_helpers.go:164), started
      from CreateNamespaceAndWatchEvents (namespace_helpers.go:189) and blocked on <-ctx.Done() after starting
      the namespace event informer.
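Every goroutine reported in [sync.Cond.Wait] above is inside the framework's per-container log streaming: one goroutine per container opens a follow-mode log stream and copies it to a file, so it blocks inside the HTTP/2 response body whenever no new log lines arrive. The sketch below shows that pattern with client-go; streamContainerLogs, the clientset and the output path are assumptions for illustration, not the framework's exact code.

package watch

import (
    "bufio"
    "context"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
)

// streamContainerLogs follows one container's logs and copies them to a local file,
// mirroring the WatchDeploymentLogs stacks above (GetLogs -> Stream -> bufio ReadFrom).
func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, container, outPath string) error {
    // Follow keeps the response body open, so ReadFrom below blocks (the sync.Cond.Wait
    // frames) until new log lines arrive, the stream closes, or ctx is cancelled.
    req := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{
        Container: container,
        Follow:    true,
    })
    podLogs, err := req.Stream(ctx)
    if err != nil {
        return err
    }
    defer podLogs.Close()

    f, err := os.Create(outPath)
    if err != nil {
        return err
    }
    defer f.Close()

    out := bufio.NewWriter(f)
    defer out.Flush()

    // An unexpected EOF just means the pod went away mid-stream; as the framework's own
    // comment notes, a broken log stream should not fail the test, so callers usually only log it.
    if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
        return err
    }
    return nil
}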
(The same progress report is emitted again at Spec Runtime 28m0.102s, 29m0.108s and 30m0.113s, with the step "Waiting for one control plane node to exist" at 26m7.229s, 27m7.235s and 28m7.24s respectively. Spec Goroutine 27078 and the goroutines of interest show the same stacks in every poll; only the elapsed-time and wait counters advance.)
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 31m0.116s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 31m0.095s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 29m7.243s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27078 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc000282460, {0x260af10?, 0x389d700}, 0x1, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc000282460, {0x260af10, 0x389d700}, {0xc000a8b0b0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?, 0xc000cd9800?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f98d87d4b10?, 0xc0008c9f80?}, 0xc00174c9c0?}, {0xc001bed620, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc0018b1e40}, {{0xc0019c7440, 0x22}, {0xc0007493ff, 0x31}, {0xc000749431, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc001895500}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 26473 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26931 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001384ac8, 0xce) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384ab0, {0xc001884000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc001884000?, 0xc001b04300?, 0xc000216000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04300}, {0x7f98d86289f0, 0xc001384a80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d48, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000fbb9f0, {0x7f98d86289f0, 0xc001384a80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00046fd80, 0x3e}, {0xc00046fdc0, 0x39}, {0xc0019b9560, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26907 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26910 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 26934 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001894348, 0x239) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001894330, {0xc00190e000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00190e000?, 0xc001b04420?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001b04420}, {0x7f98d86289f0, 0xc001894300}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425f40, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f879f0, {0x7f98d86289f0, 0xc001894300}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00218f0b0, 0x28}, {0xc00218f110, 0x23}, {0xc002422910, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26469 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc0014027c8, 0xca) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0014027b0, {0xc0014ae000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0014ae000?, 0xc000c27980?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000c27980}, {0x7f98d86289f0, 0xc001402780}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004ce080, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f859f0, {0x7f98d86289f0, 0xc001402780}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26470 [sync.Cond.Wait, 31 minutes] sync.runtime_notifyListWait(0xc001384948, 0x2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001384930, {0xc0015de000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0015de000?, 0xc001a1b020?, 0xc000500400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001a1b020}, {0x7f98d86289f0, 0xc001384900}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000116d18, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0024f79f0, {0x7f98d86289f0, 0xc001384900}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc00059f8c0, 0x29}, {0xc00059f8f0, 0x24}, {0xc00147e4d0, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 26480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27077 [chan receive, 31 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc001bfa280}, {0xc0019a5980, {0xc0019c7380, 0x22}, {0xc0019c71d0, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 26477 [sync.Cond.Wait, 30 minutes] sync.runtime_notifyListWait(0xc00154f248, 0x1e) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00154f230, {0xc0016f8000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc0016f8000?, 0xc000a8a050?, 0xc000100400?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc000a8a050}, {0x7f98d86289f0, 0xc00154f200}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000425d50, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000f8b9f0, {0x7f98d86289f0, 0xc00154f200}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0009b3fc0, 0x3a}, {0xc00071c080, 0x35}, {0xc0003d56a0, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | [FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 11:45:52.666 < Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 11:45:52.666 (31m52.853s) > Enter [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 11:45:52.666 STEP: Dumping logs from the "k8s-upgrade-and-conformance-h8hsmj" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:52.666 STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-e8by05" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:52.666 STEP: Deleting cluster k8s-upgrade-and-conformance-e8by05/k8s-upgrade-and-conformance-h8hsmj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:45:52.893 STEP: Deleting cluster k8s-upgrade-and-conformance-h8hsmj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 11:45:52.912 INFO: Waiting for the Cluster k8s-upgrade-and-conformance-e8by05/k8s-upgrade-and-conformance-h8hsmj to be deleted STEP: Waiting for cluster k8s-upgrade-and-conformance-h8hsmj to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 11:45:52.926 STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 11:46:02.934 INFO: Deleting namespace k8s-upgrade-and-conformance-e8by05 < Exit [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 11:46:02.956 (10.29s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 11:46:02.956 STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 11:46:02.956 < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 11:46:03.782 (826ms)
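The timeout above is produced by the framework's control-plane wait helper visible in the spec goroutine (controlplane_helpers.go:154): it polls the management cluster until at least one control-plane Machine exists and fails with "No Control Plane machines came into existence." once the 1800s window elapses. Below is a minimal sketch of that polling pattern only; countControlPlaneMachines is a hypothetical stub standing in for the framework's Machine list, not Cluster API code.

```go
package e2e_test

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// countControlPlaneMachines is a hypothetical stub; the real helper lists the
// Machines owned by the KubeadmControlPlane on the management cluster.
// It is stubbed to return 1 so this sketch passes immediately.
func countControlPlaneMachines(ctx context.Context) (int, error) {
	return 1, nil
}

func TestWaitForOneControlPlaneMachine(t *testing.T) {
	g := NewWithT(t)
	ctx := context.Background()

	// Mirrors the Eventually(...).Should(BeTrue(), "No Control Plane machines
	// came into existence.") assertion in the trace: keep polling until at
	// least one control-plane Machine exists, or fail after the timeout.
	g.Eventually(func() (bool, error) {
		count, err := countControlPlaneMachines(ctx)
		if err != nil {
			return false, err
		}
		return count > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(), "No Control Plane machines came into existence.")
}
```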
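The goroutines parked in bufio.(*Writer).ReadFrom for most of the spec are the WatchDeploymentLogs helpers, which follow every container's logs concurrently while the test runs. A minimal sketch of that pattern, assuming a client-go clientset; streamPodLogs and logDir are illustrative names, not the framework's API.

```go
package e2e

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows each container's logs in its own goroutine and
// buffers them to a file, which is why the dump above shows several
// goroutines blocked in bufio.(*Writer).ReadFrom for the lifetime of the spec.
func streamPodLogs(ctx context.Context, cs kubernetes.Interface, pod corev1.Pod, logDir string) {
	for _, container := range pod.Spec.Containers {
		go func(pod corev1.Pod, container corev1.Container) {
			opts := &corev1.PodLogOptions{Container: container.Name, Follow: true}
			podLogs, err := cs.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, opts).Stream(ctx)
			if err != nil {
				return // failing to stream logs should not fail the test
			}
			defer podLogs.Close()

			f, err := os.Create(filepath.Join(logDir, fmt.Sprintf("%s-%s.log", pod.Name, container.Name)))
			if err != nil {
				return
			}
			defer f.Close()

			out := bufio.NewWriter(f)
			defer out.Flush()
			// Blocks until the log stream ends or ctx is cancelled.
			_, _ = out.ReadFrom(podLogs)
		}(pod, container)
	}
}
```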
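Goroutine 27077 ("chan receive, 31 minutes") is the namespace event watcher started when the test namespace is created: an informer scoped to that namespace runs until the watch context is cancelled during teardown. A rough sketch of that shape follows; watchNamespaceEvents and its print handler are illustrative, not the framework's code.

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchNamespaceEvents streams Events from a single namespace until ctx is
// cancelled, which is why the corresponding goroutine sits in "chan receive"
// on ctx.Done() for the whole spec.
func watchNamespaceEvents(ctx context.Context, cs kubernetes.Interface, namespace string) {
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute, informers.WithNamespace(namespace))
	factory.Core().V1().Events().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if e, ok := obj.(*corev1.Event); ok {
				fmt.Printf("[%s] %s/%s: %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
			}
		},
	})

	stopInformer := make(chan struct{})
	defer close(stopInformer)
	factory.Start(stopInformer)

	// Block until the caller cancels the watch context (typically in AfterEach).
	<-ctx.Done()
}
```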
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [It] Conformance Tests Should run conformance tests
capg-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capg-e2e [It] Workload cluster creation Creating a highly available control-plane cluster Should create a cluster with 3 control-plane and 2 worker nodes
capg-e2e [It] Workload cluster creation Creating a single control-plane cluster Should create a cluster with 1 worker node and can be scaled