PR       | cpanato: Upgrade ginkgo
Result   | FAILURE
Tests    | 2 failed / 6 succeeded
Started  |
Elapsed  | 37m3s
Revision | 56afee9feae7197c88aab4081a3f0fd9fa386791
Refs     | 438
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sKCP\supgrade\sin\sa\sHA\scluster\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.000s.
No Control Plane machines came into existence.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 10:36:57.279
from junit.e2e_suite.1.xml
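The assertion that produced this message is the Gomega polling helper excerpted in the goroutine dump below (controlplane_helpers.go:154). A minimal sketch of that pattern, assuming cluster-api's v1beta1 types and controller-runtime's client rather than quoting the upstream helper verbatim, is shown here; the function name and label selector are illustrative assumptions:

```go
// Minimal sketch of the polling assertion that timed out in this run. The label
// selector below is an illustrative assumption; the real helper in
// controlplane_helpers.go builds its list options differently.
package framework

import (
	"context"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine polls until at least one control-plane Machine owned
// by the cluster exists, or the supplied intervals (timeout, polling period) expire.
func waitForOneControlPlaneMachine(ctx context.Context, lister client.Client, cluster *clusterv1.Cluster, intervals ...interface{}) {
	Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		// Assumption: control-plane Machines carry the standard cluster-name and
		// control-plane labels used by Cluster API.
		if err := lister.List(ctx, machines,
			client.InNamespace(cluster.Namespace),
			client.MatchingLabels{
				"cluster.x-k8s.io/cluster-name":  cluster.Name,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err
		}
		// The failure above means this count stayed at zero for the full 1800s window.
		return len(machines.Items) > 0, nil
	}, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
}
```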
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2 created
docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-control-plane created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-control-plane created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-md-0 created
domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-md-0 created
configmap/k8s-upgrade-and-conformance-8gcwt2-crs-cni created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-crs-cni created
configmap/k8s-upgrade-and-conformance-8gcwt2-crs-ccm created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gcwt2-crs-ccm created
domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created
domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created

> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 10:05:04.682
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 10:05:04.682 (0s)
> Enter [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 10:05:04.682
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:05:04.682
INFO: Creating namespace k8s-upgrade-and-conformance-crqhk3
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-crqhk3"
< Exit [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 10:05:04.728 (46ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 10:05:04.728
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 10:05:04.728
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-8gcwt2" using the "upgrades" template (Kubernetes v1.24.9, 3 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-8gcwt2 --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 3 --worker-machine-count 0 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 10:05:07.203
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-crqhk3/k8s-upgrade-and-conformance-8gcwt2-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 10:06:57.278

Automatically polling progress:
  Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.047s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  In [It] (Node Runtime: 10m0.001s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
  At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m7.451s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
  goroutine 158 [select]
    github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
    github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1})
      /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
  > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
        |     }
        |     return count > 0, nil
        >   }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
        | }
  > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
        | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
        > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
        |   Lister:  input.Lister,
        |   Cluster: input.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
        | if input.WaitForControlPlaneInitialized == nil {
        |   input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
        >     result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
        |       Lister:  input.ClusterProxy.GetClient(),
        |       Cluster: result.Cluster,
  > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
        |
        | log.Logf("Waiting for control plane to be initialized")
        > input.WaitForControlPlaneInitialized(ctx, input, result)
        |
        | if input.CNIManifestPath != "" {
  > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
        | By("Creating a workload cluster")
        |
        > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
        |   ClusterProxy: input.BootstrapClusterProxy,
        |   ConfigCluster: clusterctl.ConfigClusterInput{
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480})
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

  Goroutines of Interest
  goroutine 157 [chan receive, 10 minutes]
  > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}})
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164
        |   defer close(stopInformer)
        |   informerFactory.Start(stopInformer)
        > <-ctx.Done()
        |   stopInformer <- struct{}{}
        | }
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1()
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191
        | go func() {
        |   defer GinkgoRecover()
        >   WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
        |     ClientSet: input.ClientSet,
        |     Name:      namespace.Name,
  > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189
        | log.Logf("Creating event watcher for namespace %q", input.Name)
        | watchesCtx, cancelWatches := context.WithCancel(ctx)
        > go func() {
        |   defer GinkgoRecover()
        |   WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{
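The frames at cluster_upgrade.go:121 in the snapshot above show the spec entering clusterctl.ApplyClusterTemplateAndWait with the parameters logged by the INFO lines (flavor "upgrades", Kubernetes v1.24.9, 3 control-plane machines, 0 workers). A rough sketch of that call follows; only the ClusterProxy and ConfigCluster fields are confirmed by the trace, and the surrounding wiring (proxy, namespace, clusterctl config path, artifact folder) is assumed rather than taken from the suite:

```go
// Minimal sketch, not the upstream spec verbatim: the helper name and the variables
// passed in (proxy, ns, clusterctlConfigPath, artifactFolder) are assumed stand-ins
// for the suite's real wiring.
package e2e

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/pointer"
	"sigs.k8s.io/cluster-api/test/framework"
	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

// createUpgradeCluster mirrors the "Creating a workload cluster" step from the log:
// same flavor, version, and machine counts as the INFO lines above.
func createUpgradeCluster(ctx context.Context, proxy framework.ClusterProxy, ns *corev1.Namespace, clusterctlConfigPath, artifactFolder string) *clusterctl.ApplyClusterTemplateAndWaitResult {
	result := &clusterctl.ApplyClusterTemplateAndWaitResult{}
	clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
		ClusterProxy: proxy,
		ConfigCluster: clusterctl.ConfigClusterInput{
			LogFolder:                filepath.Join(artifactFolder, "clusters", proxy.GetName()),
			ClusterctlConfigPath:     clusterctlConfigPath,
			KubeconfigPath:           proxy.GetKubeconfigPath(),
			Flavor:                   "upgrades",
			Namespace:                ns.Name,
			ClusterName:              "k8s-upgrade-and-conformance-8gcwt2",
			KubernetesVersion:        "v1.24.9",
			ControlPlaneMachineCount: pointer.Int64(3),
			WorkerMachineCount:       pointer.Int64(0),
		},
	}, result)
	return result
}
```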
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 22m0.081s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 22m0.035s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 20m7.485s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 22 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 23m0.085s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 23m0.038s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 21m7.489s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 23 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 24m0.087s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 24m0.041s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 22m7.491s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 24 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 25m0.09s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 25m0.044s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 23m7.494s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 
| } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 25 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 26m0.093s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 26m0.046s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 24m7.497s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 26 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 27m0.097s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 27m0.05s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 25m7.501s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 
| } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 27 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 28m0.102s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 28m0.056s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 26m7.506s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 28 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 29m0.105s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 29m0.059s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 27m7.509s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 29 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > 
sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 30m0.108s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 30m0.061s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 28m7.511s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 30 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ Automatically polling progress: Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 31m0.11s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 31m0.064s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 29m7.514s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 158 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc00061dd50, {0x260af10?, 0x389d700}, 0x1, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc00061dd50, {0x260af10, 0x389d700}, {0xc0005fed40, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?, 0xc0003cdc00?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 
| } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc000138008}, {{0x7fad381774a8?, 0xc00070f030?}, 0xc0000dfd40?}, {0xc000a8c4c0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000936500}, {{0xc000a9a0f0, 0x22}, {0xc0005e221f, 0x31}, {0xc0005e2251, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xa8da0e, 0xc00055c480}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 157 [chan receive, 31 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc000a90540}, {0xc00089c480, {0xc000a9a030, 0x22}, {0xc000753e30, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ [FAILED] Timed out after 1800.000s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 10:36:57.279 < Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 10:36:57.279 (31m52.55s) > Enter [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 10:36:57.279 STEP: Dumping logs from the "k8s-upgrade-and-conformance-8gcwt2" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:36:57.279 STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-crqhk3" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:36:57.279 STEP: Deleting cluster k8s-upgrade-and-conformance-crqhk3/k8s-upgrade-and-conformance-8gcwt2 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:36:57.499 STEP: Deleting cluster k8s-upgrade-and-conformance-8gcwt2 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 10:36:57.519 INFO: Waiting for the Cluster k8s-upgrade-and-conformance-crqhk3/k8s-upgrade-and-conformance-8gcwt2 to be deleted STEP: Waiting for cluster k8s-upgrade-and-conformance-8gcwt2 to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 10:36:57.532 STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:37:07.54 INFO: Deleting namespace k8s-upgrade-and-conformance-crqhk3 < Exit [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 10:37:07.565 (10.286s) > Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 10:37:07.565 STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 10:37:07.565 < Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 10:37:08.391 (826ms)
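Both failed specs hit the same wait: the Gomega Eventually at framework/controlplane_helpers.go:154 polls for Machines owned by the KubeadmControlPlane and gives up after 1800s with "No Control Plane machines came into existence." The sketch below is a simplified illustration of that polling pattern, not the framework source; it assumes a controller-runtime client, the well-known cluster.x-k8s.io/cluster-name and cluster.x-k8s.io/control-plane labels, and intervals that only mirror the 1800s timeout seen in the failure (the real values come from the e2e config intervals).

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForOneControlPlaneMachine is a hypothetical, simplified version of the
// wait shown in the stack traces: list Machines that belong to the cluster's
// control plane and succeed once at least one exists.
func waitForOneControlPlaneMachine(ctx context.Context, g Gomega, c client.Client, namespace, clusterName string) {
	g.Eventually(func() (bool, error) {
		machines := &clusterv1.MachineList{}
		if err := c.List(ctx, machines,
			client.InNamespace(namespace),
			client.MatchingLabels{
				// Well-known Cluster API labels; the Go constant names differ
				// across CAPI releases, so the raw keys are used in this sketch.
				"cluster.x-k8s.io/cluster-name":  clusterName,
				"cluster.x-k8s.io/control-plane": "",
			},
		); err != nil {
			return false, err
		}
		// Same condition as the framework snippet in the trace: count > 0.
		return len(machines.Items) > 0, nil
	}, 30*time.Minute, 10*time.Second).Should(BeTrue(),
		"No Control Plane machines came into existence.")
}

If the poll never sees a Machine, as in both runs here, the resources and controller logs collected by the AfterEach "Dumping logs" step (the DOCluster, DOMachineTemplate and KubeadmControlPlane objects created above) are the next place to look for why the infrastructure for the first control-plane machine was never provisioned.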
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capg\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sworkload\scluster\supgrade\sspec\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. Expected <bool>: false to be true In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 10:37:07.275 from junit.e2e_suite.1.xml
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj created docluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-control-plane created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-control-plane created machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-md-0 created domachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-md-0 created configmap/k8s-upgrade-and-conformance-9r47gj-crs-cni created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-crs-cni created configmap/k8s-upgrade-and-conformance-9r47gj-crs-ccm created clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-9r47gj-crs-ccm created domachinetemplate.infrastructure.cluster.x-k8s.io/cp-k8s-upgrade-and-conformance created domachinetemplate.infrastructure.cluster.x-k8s.io/worker-k8s-upgrade-and-conformance created > Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 10:05:04.672 < Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:34 @ 12/29/22 10:05:04.672 (0s) > Enter [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 10:05:04.672 STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:05:04.672 INFO: Creating namespace k8s-upgrade-and-conformance-4fxuqi INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-4fxuqi" < Exit [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:84 @ 12/29/22 10:05:04.698 (26ms) > Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 10:05:04.698 STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:119 @ 12/29/22 10:05:04.698 INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-9r47gj" using the "upgrades" template (Kubernetes v1.24.9, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster k8s-upgrade-and-conformance-9r47gj --infrastructure (default) --kubernetes-version v1.24.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 12/29/22 10:05:07.188 INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-4fxuqi/k8s-upgrade-and-conformance-9r47gj-control-plane to be provisioned STEP: Waiting for one control plane node to exist - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 12/29/22 10:07:07.273 Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 10m0.027s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 10m0.001s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 7m57.426s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x139) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 7 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xab) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. 
> go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 10 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 11m0.032s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 11m0.006s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 8m57.431s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. 
") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x141) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 8 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 11 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 8 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xab) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 11 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 10 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 12m0.037s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 12m0.01s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 9m57.435s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x14d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 12 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 9 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xab) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
        | out := bufio.NewWriter(f)
        | defer out.Flush()
        > _, err = out.ReadFrom(podLogs)
        | if err != nil && err != io.ErrUnexpectedEOF {
        |   // Failing to stream logs should not cause the test to fail
      sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161

    goroutine 27480 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
    goroutine 27667 [chan receive, 12 minutes]    WatchNamespaceEvents — waiting on ctx.Done() (namespace_helpers.go:164), started from CreateNamespaceAndWatchEvents (namespace_helpers.go:189)
    goroutine 27476 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
    goroutine 27491 [sync.Cond.Wait, 11 minutes]  WatchDeploymentLogs.func2 — blocked in http2.(*pipe).Read while bufio.(*Writer).ReadFrom copies pod logs (deployment_helpers.go:186)
    goroutine 27199 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)

Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
    Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 13m0.041s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    In [It] (Node Runtime: 13m0.015s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    At [By Step] Waiting for one control plane node to exist (Step Runtime: 10m57.44s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
    goroutine 27668 [select]
      github.com/onsi/gomega/internal.(*AsyncAssertion).match
        /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
      github.com/onsi/gomega/internal.(*AsyncAssertion).Should
        /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
      > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
          | return count > 0, nil
          > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
      > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
          | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
          > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
      > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
          > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
      > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
          | log.Logf("Waiting for control plane to be initialized")
          > input.WaitForControlPlaneInitialized(ctx, input, result)
      > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
          | By("Creating a workload cluster")
          > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
      github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3
        /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
      github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
        /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834-847

  Goroutines of Interest
    goroutine 27495 [sync.Cond.Wait]              WatchDeploymentLogs.func2 — blocked in http2.(*pipe).Read while bufio.(*Writer).ReadFrom copies pod logs (deployment_helpers.go:186)
    goroutine 27446 [sync.Cond.Wait, 10 minutes]  WatchDeploymentLogs.func2 — same stack (deployment_helpers.go:186)
    goroutine 27440 [sync.Cond.Wait, 13 minutes]  WatchDeploymentLogs.func2 — same stack (deployment_helpers.go:186)
    goroutine 27393 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
    goroutine 27439 [sync.Cond.Wait, 10 minutes]  WatchDeploymentLogs.func2 — same stack (deployment_helpers.go:186)
    goroutine 27480 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
    goroutine 27667 [chan receive, 13 minutes]    WatchNamespaceEvents — waiting on ctx.Done() (namespace_helpers.go:164), started from CreateNamespaceAndWatchEvents (namespace_helpers.go:189)
    goroutine 27476 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
    goroutine 27491 [sync.Cond.Wait, 12 minutes]  WatchDeploymentLogs.func2 — same stack (deployment_helpers.go:186)
    goroutine 27199 [select]                      WatchPodMetrics.func3 — waiting on ctx.Done() (deployment_helpers.go:228)
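[Editor's note] The spec goroutine above is parked inside the framework's WaitForOneKubeadmControlPlaneMachineToExist helper, which polls the management cluster until at least one control-plane Machine exists for the workload cluster (the `return count > 0, nil` snippet at controlplane_helpers.go:154). A minimal sketch of that polling pattern follows; it assumes a controller-runtime client and the well-known cluster.x-k8s.io labels, and the function name and signature are illustrative, not the framework's actual code.

    // Sketch only: mirrors the Eventually(...).Should(BeTrue(), ...) pattern visible at
    // controlplane_helpers.go:154. Names and signature are illustrative.
    package frameworksketch

    import (
        "context"

        . "github.com/onsi/gomega"
        clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // waitForFirstControlPlaneMachine polls until at least one control-plane Machine exists
    // for the given workload cluster, failing with the message seen in this run if the
    // intervals expire.
    func waitForFirstControlPlaneMachine(ctx context.Context, c client.Client, namespace, clusterName string, intervals ...interface{}) {
        Eventually(func() (bool, error) {
            machines := &clusterv1.MachineList{}
            if err := c.List(ctx, machines,
                client.InNamespace(namespace),
                client.MatchingLabels{
                    "cluster.x-k8s.io/cluster-name":  clusterName, // well-known CAPI labels
                    "cluster.x-k8s.io/control-plane": "",
                },
            ); err != nil {
                return false, err
            }
            // The wait only ends once at least one matching Machine has been created.
            return len(machines.Items) > 0, nil
        }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence.")
    }

In this run the condition never became true within the 1800s interval, so the infrastructure provider never produced a control-plane Machine for the KubeadmControlPlane.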
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
    Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 14m0.046s)
    In [It] (Node Runtime: 14m0.02s)
    At [By Step] Waiting for one control plane node to exist (Step Runtime: 11m57.445s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine and Goroutines of Interest are unchanged from the 13m report: goroutine 27668 is still
  blocked in WaitForOneKubeadmControlPlaneMachineToExist (controlplane_helpers.go:154); the
  WatchDeploymentLogs goroutines are still parked in the http2 response-body read (27440 now 14 minutes,
  27439 11 minutes, 27491 13 minutes; 27495 and 27446 carry no duration tag but sit in the same read);
  the WatchPodMetrics goroutines (27393, 27480, 27476, 27199) and the WatchNamespaceEvents goroutine
  (27667, now 14 minutes) are still waiting on ctx.Done().
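[Editor's note] Every goroutine reported in sync.Cond.Wait above is one of the WatchDeploymentLogs workers: it opened a follow-mode log stream for a controller container and is blocked in the HTTP/2 response-body read until the stream closes or the watch context is cancelled, so its wait time simply tracks the length of the run. A minimal sketch of that pattern, assuming a client-go clientset (helper name and arguments here are illustrative, cf. deployment_helpers.go:186):

    // Sketch of the log-streaming loop these goroutines are blocked in.
    package frameworksketch

    import (
        "bufio"
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // streamContainerLogs follows one container's logs and writes them to a local file.
    // It blocks inside out.ReadFrom until the API server closes the stream or ctx is cancelled.
    func streamContainerLogs(ctx context.Context, cs kubernetes.Interface, namespace, pod, container, path string) error {
        req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
        podLogs, err := req.Stream(ctx)
        if err != nil {
            return err
        }
        defer podLogs.Close()

        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        out := bufio.NewWriter(f)
        defer out.Flush()

        // Failing to stream logs should not fail the caller; an unexpected EOF just means
        // the stream was cut short, which is how the framework treats it as well.
        if _, err := out.ReadFrom(podLogs); err != nil && err != io.ErrUnexpectedEOF {
            return err
        }
        return nil
    }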
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
    Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 15m0.051s)
    In [It] (Node Runtime: 15m0.024s)
    At [By Step] Waiting for one control plane node to exist (Step Runtime: 12m57.449s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine and Goroutines of Interest are again unchanged: goroutine 27668 remains blocked in
  WaitForOneKubeadmControlPlaneMachineToExist (controlplane_helpers.go:154); the WatchDeploymentLogs
  goroutines remain in the http2 response-body read (27440 now 15 minutes, 27439 12 minutes, 27491 14
  minutes; 27495 and 27446 undated); the WatchPodMetrics goroutines (27393, 27480, 27476, 27199) and the
  WatchNamespaceEvents goroutine (27667, now 15 minutes) remain waiting on ctx.Done().
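[Editor's note] Goroutine 27667 (WatchNamespaceEvents) and the WatchPodMetrics goroutines are blocked in a select/chan-receive on ctx.Done() per their stacks, so they only exit when the spec's watch context is cancelled. The framework drives the event watcher through a shared informer factory (namespace_helpers.go:164); the sketch below shows the same ctx.Done()-driven shutdown using the plain watch API instead, purely for illustration.

    // Sketch of a context-cancelled namespace event watcher; the real helper uses an
    // informer factory started from CreateNamespaceAndWatchEvents.
    package frameworksketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchNamespaceEvents prints events for a namespace until ctx is cancelled.
    func watchNamespaceEvents(ctx context.Context, cs kubernetes.Interface, namespace string) error {
        w, err := cs.CoreV1().Events(namespace).Watch(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        defer w.Stop()

        for {
            select {
            case <-ctx.Done():
                // This is the branch the dumped goroutines are parked on: nothing happens
                // until the test's watch context is cancelled at the end of the spec.
                return ctx.Err()
            case ev, ok := <-w.ResultChan():
                if !ok {
                    return nil // the API server closed the watch
                }
                if e, isEvent := ev.Object.(*corev1.Event); isEvent {
                    fmt.Printf("[%s] %s/%s: %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
                }
            }
        }
    }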
Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
    Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 16m0.056s)
    In [It] (Node Runtime: 16m0.03s)
    At [By Step] Waiting for one control plane node to exist (Step Runtime: 13m57.455s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine and Goroutines of Interest are unchanged once more: goroutine 27668 remains blocked in
  WaitForOneKubeadmControlPlaneMachineToExist (controlplane_helpers.go:154); the WatchDeploymentLogs
  goroutines remain in the http2 response-body read (27440 now 16 minutes, 27439 13 minutes, 27491 15
  minutes; 27495 and 27446 undated); the WatchPodMetrics goroutines (27393, 27480, 27476, 27199) and the
  WatchNamespaceEvents goroutine (27667, now 16 minutes) remain waiting on ctx.Done().

Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade]
    Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 17m0.061s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    In [It] (Node Runtime: 17m0.034s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    At [By Step] Waiting for one control plane node to exist (Step Runtime: 14m57.459s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

  Spec Goroutine
    goroutine 27668 [select]
      github.com/onsi/gomega/internal.(*AsyncAssertion).match
        /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
      github.com/onsi/gomega/internal.(*AsyncAssertion).Should
        /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
      > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x17f) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 17 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 14 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xab) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 17 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 16 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 18m0.066s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 18m0.04s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 15m57.465s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x18a) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 3 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 18 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 15 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xab) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 18 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 17 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 19m0.07s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 19m0.044s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 16m57.469s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x19a) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 4 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb5) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 19 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001231e48, 0xb2) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 19 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 18 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 20m0.075s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 20m0.049s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 17m57.474s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1b1) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xb9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 20 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001231e48, 0xbd) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 20 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 19 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 21m0.079s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 21m0.053s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 18m57.478s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1ba) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xb9) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 21 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 21 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 20 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 22m0.084s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 22m0.058s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 19m57.483s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1c8) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 22 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 22 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 21 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 23m0.09s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 23m0.063s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 20m57.488s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1d0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xc7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 23 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 23 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 22 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000})
    /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?})
    /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0})
    /usr/local/go/src/io/io.go:427
io.Copy(...)
    /usr/local/go/src/io/io.go:386
os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480})
    /usr/local/go/src/os/file.go:162
os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480})
    /usr/local/go/src/os/file.go:156
bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480})
    /usr/local/go/src/bufio/bufio.go:784
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186
      | out := bufio.NewWriter(f)
      | defer out.Flush()
      > _, err = out.ReadFrom(podLogs)
      | if err != nil && err != io.ErrUnexpectedEOF {
      | // Failing to stream logs should not cause the test to fail
> sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161
      |
      | // Watch each container's logs in a goroutine so we can stream them all concurrently.
      > go func(pod corev1.Pod, container corev1.Container) {
      | defer GinkgoRecover()
      |

goroutine 27199 [select]
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3()
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228
      | defer GinkgoRecover()
      | for {
      > select {
      | case <-ctx.Done():
      | return
> sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225
      | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment))
      |
      > go func() {
      | defer GinkgoRecover()
      | for {

Automatically polling progress:
  Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 24m0.093s)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
    In [It] (Node Runtime: 24m0.067s)
      /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118
      At [By Step] Waiting for one control plane node to exist (Step Runtime: 21m57.492s)
        /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133

Spec Goroutine
goroutine 27668 [select]
github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1})
    /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426
github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1})
    /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110
> sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2})
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154
      | }
      | return count > 0, nil
      > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
      | }
      |
> sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2})
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249
      |
      | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane))
      > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{
      | Lister: input.Lister,
      | Cluster: input.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373
      | if input.WaitForControlPlaneInitialized == nil {
      | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) {
      > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{
      | Lister: input.ClusterProxy.GetClient(),
      | Cluster: result.Cluster,
> sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...)
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334
      |
      | log.Logf("Waiting for control plane to be initialized")
      > input.WaitForControlPlaneInitialized(ctx, input, result)
      |
      | if input.CNIManifestPath != "" {
> sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121
      | By("Creating a workload cluster")
      |
      > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
      | ClusterProxy: input.BootstrapClusterProxy,
      | ConfigCluster: clusterctl.ConfigClusterInput{
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370})
    /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834

Goroutines of Interest
goroutine 27495 [sync.Cond.Wait]
sync.runtime_notifyListWait(0xc0022721c8, 0x1d9)
    /usr/local/go/src/runtime/sema.go:517
sync.(*Cond).Wait(0x0?)
    /usr/local/go/src/sync/cond.go:70
golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000})
    /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76
golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?})
    /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512
io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0})
    /usr/local/go/src/io/io.go:427
io.Copy(...)
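The Spec Goroutine above is where the run is stuck: a Gomega Eventually assertion inside WaitForOneKubeadmControlPlaneMachineToExist re-lists the Machines owned by the KubeadmControlPlane and only passes once at least one exists, failing with "No Control Plane machines came into existence." when the intervals run out. The following is a minimal, self-contained sketch of that polling pattern only; the countControlPlaneMachines helper and the hard-coded 10s/500ms intervals are illustrative stand-ins, not the framework's actual lister or the e2e config's intervals.

package main

import (
	"fmt"
	"time"

	"github.com/onsi/gomega"
)

// countControlPlaneMachines is a hypothetical stand-in for the framework's
// Lister-based machine count; here it simulates a control-plane Machine
// appearing after a short delay.
func countControlPlaneMachines(start time.Time) int {
	if time.Since(start) > 3*time.Second {
		return 1
	}
	return 0
}

func main() {
	// Outside Ginkgo a fail handler must be supplied; in the real spec the
	// handler fails the running test instead of printing.
	g := gomega.NewGomega(func(message string, _ ...int) {
		fmt.Println("FAIL:", message)
	})

	start := time.Now()
	// Same shape as the assertion at controlplane_helpers.go:154: poll until
	// at least one control-plane Machine exists, or give up at the timeout.
	g.Eventually(func() (bool, error) {
		return countControlPlaneMachines(start) > 0, nil
	}, "10s", "500ms").Should(gomega.BeTrue(), "No Control Plane machines came into existence.")

	fmt.Println("observed a control plane machine")
}

In the spec itself the intervals are taken from the test's e2e config (typically the wait-control-plane intervals), so the step keeps re-listing machines until that window expires, which is why the progress reports below repeat while the DOMachine infrastructure never produces a node.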
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xc7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 24 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 3 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 24 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 23 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 25m0.098s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 25m0.072s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 22m57.497s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1e3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xc7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 25 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 4 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 25 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 24 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 26m0.102s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 26m0.076s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 23m57.501s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1ee) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 3 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xc7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 26 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 5 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 26 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 25 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 27m0.107s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 27m0.081s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 24m57.506s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x1f7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait, 4 minutes] sync.runtime_notifyListWait(0xc002272dc8, 0xc7) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 27 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 6 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xc3) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 27 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 26 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for {
[The same "Automatically polling progress" report, with an identical Spec Goroutine stack and the same Goroutines of Interest (only their wait times advancing), is emitted again at Spec Runtime 28m0.113s, 29m0.118s, and 30m0.122s, each time still at the step "Waiting for one control plane node to exist".]
Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 31m0.127s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 31m0.1s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 28m57.525s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2})
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x230) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xc8) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 31 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait, 2 minutes] sync.runtime_notifyListWait(0xc001231e48, 0xd8) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 31 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 30 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { Automatically polling progress: Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest (Spec Runtime: 32m0.132s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 In [It] (Node Runtime: 32m0.105s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 At [By Step] Waiting for one control plane node to exist (Step Runtime: 29m57.53s) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 Spec Goroutine goroutine 27668 [select] github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc0006e4d20, {0x260af10?, 0x389d700}, 0x1, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:426 github.com/onsi/gomega/internal.(*AsyncAssertion).Should(0xc0006e4d20, {0x260af10, 0x389d700}, {0xc0007d6bd0, 0x1, 0x1}) /home/prow/go/pkg/mod/github.com/onsi/gomega@v1.24.1/internal/async_assertion.go:110 > sigs.k8s.io/cluster-api/test/framework.WaitForOneKubeadmControlPlaneMachineToExist({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?, 0xc000cb0c00?}, {0xc0001373a0, 0x2, 0x2}) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 | } | return count > 0, nil > }, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ") | } | > sigs.k8s.io/cluster-api/test/framework.DiscoveryAndWaitForControlPlaneInitialized({0x2619680?, 0xc00005a0a0}, {{0x7f65e40747c0?, 0xc0006e4930?}, 0xc001c9e9c0?}, {0xc0001373a0, 0x2, 0x2}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:249 | | log.Logf("Waiting for the first control plane machine managed by %s to be provisioned", klog.KObj(controlPlane)) > WaitForOneKubeadmControlPlaneMachineToExist(ctx, WaitForOneKubeadmControlPlaneMachineToExistInput{ | Lister: input.Lister, | Cluster: input.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.setDefaults.func1({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:373 | if input.WaitForControlPlaneInitialized == nil { | input.WaitForControlPlaneInitialized = func(ctx context.Context, input ApplyClusterTemplateAndWaitInput, result *ApplyClusterTemplateAndWaitResult) { > result.ControlPlane = framework.DiscoveryAndWaitForControlPlaneInitialized(ctx, framework.DiscoveryAndWaitForControlPlaneInitializedInput{ | Lister: input.ClusterProxy.GetClient(), | Cluster: result.Cluster, > sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x26279a8, 0xc000c03880}, {{0xc002300120, 0x22}, {0xc000414a3f, 0x31}, {0xc000414a71, 0x17}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:334 | | log.Logf("Waiting for control plane to be initialized") > input.WaitForControlPlaneInitialized(ctx, input, result) | | if input.CNIManifestPath != "" { > sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:121 | By("Creating a workload cluster") | > clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{ | ClusterProxy: input.BootstrapClusterProxy, | ConfigCluster: clusterctl.ConfigClusterInput{ github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x139f2a0, 0xc002086370}) /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/node.go:445 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:847 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode /home/prow/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.6.1/internal/suite.go:834 Goroutines of Interest goroutine 27495 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc0022721c8, 0x24b) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc0022721b0, {0xc000236000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc000236000?, 0xc00137c070?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc00137c070}, {0x7f65e40748b8, 0xc002272180}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) 
/usr/local/go/src/io/io.go:386 os.genericReadFrom(0xc000c33000?, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020c078, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b8f9f0, {0x7f65e40748b8, 0xc002272180}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0023b1710, 0x28}, {0xc0023b1740, 0x23}, {0xc002487970, 0xb}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27446 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc002272dc8, 0xdc) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc002272db0, {0xc002338000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002338000?, 0xc0007d63b0?, 0xc00063c800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d63b0}, {0x7f65e40748b8, 0xc002272d80}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640bc8, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc0013219f0, {0x7f65e40748b8, 0xc002272d80}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000db70c0, 0x3e}, {0xc000db7100, 0x39}, {0xc001b21aa0, 0x21}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27440 [sync.Cond.Wait, 32 minutes] sync.runtime_notifyListWait(0xc001035e48, 0x0) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001035e30, {0xc00206c000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00206c000?, 0xc001c56030?, 0xc000096800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001c56030}, {0x7f65e40748b8, 0xc001035e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc000640af0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000b919f0, {0x7f65e40748b8, 0xc001035e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27393 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27439 [sync.Cond.Wait] sync.runtime_notifyListWait(0xc001231e48, 0xe4) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) /usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc001231e30, {0xc002062000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc002062000?, 0xc001527fe0?, 0xc000500800?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc001527fe0}, {0x7f65e40748b8, 0xc001231e00}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc00020d0c8, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc001b099f0, {0x7f65e40748b8, 0xc001231e00}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc0021c7b00, 0x29}, {0xc0021c7b30, 0x24}, {0xc0022b5370, 0xc}, ...}, ...}, ...) 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27480 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27667 [chan receive, 32 minutes] > sigs.k8s.io/cluster-api/test/framework.WatchNamespaceEvents({0x2619648?, 0xc00051e700}, {0xc000173980, {0xc002300060, 0x22}, {0xc00178df50, 0x22}}) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:164 | defer close(stopInformer) | informerFactory.Start(stopInformer) > <-ctx.Done() | stopInformer <- struct{}{} | } > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents.func1() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:191 | go func() { | defer GinkgoRecover() > WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ | ClientSet: input.ClientSet, | Name: namespace.Name, > sigs.k8s.io/cluster-api/test/framework.CreateNamespaceAndWatchEvents /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/namespace_helpers.go:189 | log.Logf("Creating event watcher for namespace %q", input.Name) | watchesCtx, cancelWatches := context.WithCancel(ctx) > go func() { | defer GinkgoRecover() | WatchNamespaceEvents(watchesCtx, WatchNamespaceEventsInput{ goroutine 27476 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { goroutine 27491 [sync.Cond.Wait, 31 minutes] sync.runtime_notifyListWait(0xc00203a4c8, 0x1d) /usr/local/go/src/runtime/sema.go:517 sync.(*Cond).Wait(0x0?) 
/usr/local/go/src/sync/cond.go:70 golang.org/x/net/http2.(*pipe).Read(0xc00203a4b0, {0xc00231a000, 0x8000, 0x8000}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/pipe.go:76 golang.org/x/net/http2.transportResponseBody.Read({0x10?}, {0xc00231a000?, 0xc0007d67a0?, 0xc00058e000?}) /home/prow/go/pkg/mod/golang.org/x/net@v0.3.1-0.20221206200815-1e63c2f08a10/http2/transport.go:2512 io.copyBuffer({0x25ff940, 0xc0007d67a0}, {0x7f65e40748b8, 0xc00203a480}, {0x0, 0x0, 0x0}) /usr/local/go/src/io/io.go:427 io.Copy(...) /usr/local/go/src/io/io.go:386 os.genericReadFrom(0x0?, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:162 os.(*File).ReadFrom(0xc0004e83b8, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/os/file.go:156 bufio.(*Writer).ReadFrom(0xc000a239f0, {0x7f65e40748b8, 0xc00203a480}) /usr/local/go/src/bufio/bufio.go:784 > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs.func2({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000deec80, 0x3a}, {0xc000deecc0, 0x35}, {0xc000b4f700, 0x1d}, ...}, ...}, ...) /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:186 | out := bufio.NewWriter(f) | defer out.Flush() > _, err = out.ReadFrom(podLogs) | if err != nil && err != io.ErrUnexpectedEOF { | // Failing to stream logs should not cause the test to fail > sigs.k8s.io/cluster-api/test/framework.WatchDeploymentLogs /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:161 | | // Watch each container's logs in a goroutine so we can stream them all concurrently. > go func(pod corev1.Pod, container corev1.Container) { | defer GinkgoRecover() | goroutine 27199 [select] > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics.func3() /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:228 | defer GinkgoRecover() | for { > select { | case <-ctx.Done(): | return > sigs.k8s.io/cluster-api/test/framework.WatchPodMetrics /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/deployment_helpers.go:225 | }, retryableOperationTimeout, retryableOperationInterval).Should(Succeed(), "Failed to list Pods for deployment %s", klog.KObj(input.Deployment)) | > go func() { | defer GinkgoRecover() | for { [FAILED] Timed out after 1800.001s. No Control Plane machines came into existence. 
Expected <bool>: false to be true
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:154 @ 12/29/22 10:37:07.275
< Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:118 @ 12/29/22 10:37:07.275 (32m2.576s)
> Enter [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 10:37:07.275
STEP: Dumping logs from the "k8s-upgrade-and-conformance-9r47gj" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:37:07.275
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-4fxuqi" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:37:07.275
STEP: Deleting cluster k8s-upgrade-and-conformance-4fxuqi/k8s-upgrade-and-conformance-9r47gj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:37:07.508
STEP: Deleting cluster k8s-upgrade-and-conformance-9r47gj - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 10:37:07.527
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-4fxuqi/k8s-upgrade-and-conformance-9r47gj to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-9r47gj to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 12/29/22 10:37:07.536
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/common.go:51 @ 12/29/22 10:37:17.545
INFO: Deleting namespace k8s-upgrade-and-conformance-4fxuqi
< Exit [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/e2e/cluster_upgrade.go:242 @ 12/29/22 10:37:17.56 (10.286s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 10:37:17.56
STEP: Redacting sensitive information from the logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/common.go:95 @ 12/29/22 10:37:17.56
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-digitalocean/test/e2e/capi_test.go:41 @ 12/29/22 10:37:18.371 (810ms)
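Note on where the timeout originates: the stack traces above point at a Gomega Eventually poll inside the Cluster API test framework (framework/controlplane_helpers.go:154), which repeatedly counts control-plane Machines until at least one exists or the interval expires. The following is a minimal, hedged sketch of that polling pattern only, not the framework's actual code; countControlPlaneMachines and the 30-minute/10-second intervals are placeholders standing in for the framework's real lister and the e2e config's wait intervals (the failed run used an 1800s timeout).

package sketch

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// countControlPlaneMachines is a hypothetical stand-in for the framework's
// Machine list call against the management cluster; it always reports zero
// machines here, so the sketch reproduces the timeout rather than success.
func countControlPlaneMachines(ctx context.Context) (int, error) {
	return 0, nil
}

func TestWaitForOneControlPlaneMachine(t *testing.T) {
	g := NewWithT(t)
	ctx := context.Background()

	// Assumed timeout/poll pair; the real values come from the e2e test
	// configuration rather than being hard-coded like this.
	intervals := []interface{}{30 * time.Minute, 10 * time.Second}

	// Poll until a control-plane Machine is observed or the timeout elapses,
	// mirroring the "return count > 0, nil" closure visible in the trace.
	g.Eventually(func() (bool, error) {
		count, err := countControlPlaneMachines(ctx)
		if err != nil {
			return false, err
		}
		return count > 0, nil
	}, intervals...).Should(BeTrue(), "No Control Plane machines came into existence. ")
}

Run under go test, this sketch would spend the full 30 minutes polling and then fail with the same "Timed out after 1800.000s" shape seen above, which is also why the periodic goroutine dumps show the spec goroutine parked in gomega's AsyncAssertion select loop.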
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedAfterSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [SynchronizedBeforeSuite]
capg-e2e [It] Conformance Tests Should run conformance tests
capg-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capg-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capg-e2e [It] Workload cluster creation Creating a highly available control-plane cluster Should create a cluster with 3 control-plane and 2 worker nodes
capg-e2e [It] Workload cluster creation Creating a single control-plane cluster Should create a cluster with 1 worker node and can be scaled