
Conversation

@glozow
Member

@glozow glozow commented Dec 10, 2025

Followup to #33629.

Fix misleading docs and variable names. Namely, getTransactionAncestry returns the cluster count, not the max descendant count of the ancestor set (not worth reimplementing, as it is merely a heuristic). No behavior changes - I don't think much needs to be changed for the first release containing cluster mempool.

The current CoinEligibilityFilters enforce a maximum ancestor count (summed across all outputs, so potentially an overestimate) and a maximum descendant count across the ancestors of the output.
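
For orientation, the filter is applied per OutputGroup; below is a minimal sketch of OutputGroup::EligibleForSpending, where the cluster-count member and field names are illustrative of the rename rather than the exact identifiers in the tree:

// Minimal sketch of the per-group eligibility check: the first condition is the
// confirmation filter, the last two enforce the ancestor and (now) cluster limits.
bool OutputGroup::EligibleForSpending(const CoinEligibilityFilter& eligibility_filter) const
{
    return m_depth >= (m_from_me ? eligibility_filter.conf_mine : eligibility_filter.conf_theirs)
        && m_ancestors <= eligibility_filter.max_ancestors
        && m_cluster_count <= eligibility_filter.max_cluster_count;
}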

Since #33629, these filters have started using the cluster count instead of the max descendant count across ancestors. The change isn't dangerous, as the cluster count bounds the descendant count as well (every descendant of a transaction belongs to its cluster, so its descendant count can never exceed its cluster count). Currently, the wallet is essentially enforcing a mixture of both limits - this is good while we are transitioning. Note that the cluster count enforced is 25, not 64, since it's grabbing the node's descendant count limit. While it is not an apples-to-apples comparison, a cluster count limit of 25 helps us avoid busting legacy descendant limits (which will be common on the network for a while).
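
Concretely, the 25 comes from the wallet querying the node's configured package limits; a minimal sketch of that lookup, assuming getPackageLimits keeps its current out-parameter interface (max_cluster_count is an illustrative name):

// The wallet asks the node for its ancestor/descendant package limits
// (25/25 by default, via -limitancestorcount/-limitdescendantcount) and reuses
// the descendant limit to cap the cluster count in the eligibility filters.
unsigned int limit_ancestor_count = 0;
unsigned int limit_descendant_count = 0;
wallet.chain().getPackageLimits(limit_ancestor_count, limit_descendant_count);
const size_t max_ancestors = (size_t)std::max<int64_t>(1, limit_ancestor_count);
const size_t max_cluster_count = (size_t)std::max<int64_t>(1, limit_descendant_count); // 25 by default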

Potential things for the future, out of scope for this PR:

  • When we get rid of the ancestor/descendant config options, getPackageLimits can probably be replaced with hard-coded values.
  • Change the OutputGroups to track the actual cluster count that results from spending these outputs and merging their clusters.
  • Loosen from 25 after that policy is no longer common.
  • Clean up getPackageLimits.

@DrahtBot
Contributor

DrahtBot commented Dec 10, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/34037.

Reviews

See the guideline for information on the review process.

Type        | Reviewers
Concept ACK | rkrux, ismaelsadeeq

If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.

Conflicts

No conflicts as of last run.

LLM Linter (✨ experimental)

Possible places where named args for integral literals may be used (e.g. func(x, /*named_arg=*/0) in C++, and func(x, named_arg=0) in Python):

  • [CoinEligibilityFilter(0, 1, 2)] in src/wallet/spend.cpp
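
For illustration, the suggestion is to annotate each integral literal with the parameter it binds to; the parameter names below are taken from the existing three-argument CoinEligibilityFilter constructor and may differ after this PR's renames:

// Before: bare literals give no hint which limit is which.
CoinEligibilityFilter(0, 1, 2)
// After: named-argument comments, which clang-tidy's bugprone-argument-comment
// check keeps in sync with the declaration.
CoinEligibilityFilter(/*conf_mine=*/0, /*conf_theirs=*/1, /*max_ancestors=*/2)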

2026-01-12

@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task tidy: https://github.com/bitcoin/bitcoin/actions/runs/20084176691/job/57617902247
LLM reason (✨ experimental): clang-tidy errors (bugprone-argument-comment) due to mismatched argument comments for the cluster_count parameter in coinselection tests.
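
For reference, bugprone-argument-comment fires when an inline /*name=*/ comment no longer matches the parameter name in the declaration, which is exactly what a parameter rename causes. A hypothetical example (AddGroup and its signature are made up for illustration):

// Hypothetical test helper after the rename:
void AddGroup(std::vector<OutputGroup>& groups, CAmount value, size_t cluster_count);

AddGroup(groups, value, /*max_descendants=*/2); // warning: argument comment does not match parameter name
AddGroup(groups, value, /*cluster_count=*/2);   // fixed: comment matches the renamed parameter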

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

@rkrux
Contributor

rkrux commented Dec 12, 2025

Since #33639, these filters have started using cluster count instead of max desc count across ancestors.

The linked PR doesn't seem correct to me. Is it supposed to be #33629 instead?

Contributor

@rkrux rkrux left a comment

Concept ACK dc5e20c

Nice, this change comes at the right time. Recently, I got confused by getTransactionAncestry() in a previous PR: #33528 (comment) - not anymore.

I have a limited understanding of Cluster Mempool currently, so I have based this review on the GetTransactionAncestry implementation in src/txmempool.h|cpp.
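
For context, a simplified sketch of the mempool-side signature referenced here, with the doc comment reflecting this PR's clarification; the exact types and parameter list in the tree may differ:

// Simplified sketch (src/txmempool.h): the second count reported is now the
// size of the transaction's cluster, not the maximum descendant count across
// its ancestors.
void GetTransactionAncestry(const uint256& txid, size_t& ancestors, size_t& cluster_count) const;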

Comment on lines -408 to +409
- size_t ancestors, descendants;
- wallet.chain().getTransactionAncestry(wtx.tx->GetHash(), ancestors, descendants);
+ size_t ancestors, unused_cluster_count;
+ wallet.chain().getTransactionAncestry(wtx.tx->GetHash(), ancestors, unused_cluster_count);
Contributor

In dc5e20c "[doc] coin selection filters by max cluster count, not descendant"

This portion seems suitable for the first commit - 9c299f6 "[refactor] rename variable to clarify it is unused and cluster count".

Member Author

moved to the first commit

Contributor

In dc5e20c "[doc] coin selection filters by max cluster count, not descendant"

The following remaining places within GroupOutputs seem appropriate to update in this commit, since getTransactionAncestry and OutputGroup::Insert() now accept cluster_count.

Based on the tone of the PR description, I'm not certain whether it was intentional to skip these changes.

diff --git a/src/wallet/spend.cpp b/src/wallet/spend.cpp
index 59c6ab116c..e2e2403a99 100644
--- a/src/wallet/spend.cpp
+++ b/src/wallet/spend.cpp
@@ -583,12 +583,12 @@ FilteredOutputGroups GroupOutputs(const CWallet& wallet,
         for (const auto& [type, outputs] : coins.coins) {
             for (const COutput& output : outputs) {
                 // Get mempool info
-                size_t ancestors, descendants;
-                wallet.chain().getTransactionAncestry(output.outpoint.hash, ancestors, descendants);
+                size_t ancestors, cluster_count;
+                wallet.chain().getTransactionAncestry(output.outpoint.hash, ancestors, cluster_count);
 
                 // Create a new group per output and add it to the all groups vector
                 OutputGroup group(coin_sel_params);
-                group.Insert(std::make_shared<COutput>(output), ancestors, descendants);
+                group.Insert(std::make_shared<COutput>(output), ancestors, cluster_count);
 
                 // Each filter maps to a different set of groups
                 bool accepted = false;
@@ -612,7 +612,7 @@ FilteredOutputGroups GroupOutputs(const CWallet& wallet,
     // OUTPUT_GROUP_MAX_ENTRIES COutputs, a new OutputGroup is added to the end of the vector.
     typedef std::map<std::pair<CScript, OutputType>, std::vector<OutputGroup>> ScriptPubKeyToOutgroup;
     const auto& insert_output = [&](
-            const std::shared_ptr<COutput>& output, OutputType type, size_t ancestors, size_t descendants,
+            const std::shared_ptr<COutput>& output, OutputType type, size_t ancestors, size_t cluster_count,
             ScriptPubKeyToOutgroup& groups_map) {
         std::vector<OutputGroup>& groups = groups_map[std::make_pair(output->txout.scriptPubKey,type)];
 
@@ -633,24 +633,24 @@ FilteredOutputGroups GroupOutputs(const CWallet& wallet,
             group = &groups.back();
         }
 
-        group->Insert(output, ancestors, descendants);
+        group->Insert(output, ancestors, cluster_count);
     };
 
     ScriptPubKeyToOutgroup spk_to_groups_map;
     ScriptPubKeyToOutgroup spk_to_positive_groups_map;
     for (const auto& [type, outs] : coins.coins) {
         for (const COutput& output : outs) {
-            size_t ancestors, descendants;
-            wallet.chain().getTransactionAncestry(output.outpoint.hash, ancestors, descendants);
+            size_t ancestors, cluster_count;
+            wallet.chain().getTransactionAncestry(output.outpoint.hash, ancestors, cluster_count);
 
             const auto& shared_output = std::make_shared<COutput>(output);
             // Filter for positive only before adding the output
             if (output.GetEffectiveValue() > 0) {
-                insert_output(shared_output, type, ancestors, descendants, spk_to_positive_groups_map);
+                insert_output(shared_output, type, ancestors, cluster_count, spk_to_positive_groups_map);
             }
 
             // 'All' groups
-            insert_output(shared_output, type, ancestors, descendants, spk_to_groups_map);
+            insert_output(shared_output, type, ancestors, cluster_count, spk_to_groups_map);
         }
     }
 

Member

Based on the tone in the PR description, I'm not certain if it was intentional to skip these changes.

I don't think the tone implies that your diff shouldn't be applied as well. IIUC the goal is to maintain both the ancestor limit and the new cluster count limit. The descendant limit is heuristic-based and uncertain, so the cluster limit is lowered to 25 to help avoid easily busting the previous heuristic; it is very restrictive, but that's the point. Later, when cluster mempool is widely adopted on the network and nodes enforce the cluster-count limit instead, the ancestor limit would be dropped and the cluster count limit bumped to 64.

So yeah, I think these are just leftovers that should also be updated.

I grepped for “descendants” to see if I could find more, but it seems your diff got them all.

Member Author

Thanks, added your diff

@fanquake
Member

cc @murchandamus

@ismaelsadeeq
Member

Concept ACK

@achow101 achow101 self-requested a review December 23, 2025 21:59
@glozow
Member Author

glozow commented Jan 8, 2026

I will circle back soon.

Avoid confusion by clarifying the docs and renaming the variables that
now hold cluster count rather than descendant count. No behavior change.
@glozow
Member Author

glozow commented Jan 12, 2026

Ready for review again, thanks @rkrux and @ismaelsadeeq
