p2p: Replace per-peer transaction rate-limiting with global rate limits #34628
Open
ajtowns wants to merge 10 commits into bitcoin:master from
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews: See the guideline for information on the review process.

Conflicts: Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed a3d3fe3 to cbae021 (Compare)
This was referenced Feb 20, 2026
/** Consume n tokens. Returns false if the balance dropped below m_max_debt. */
bool decrement(double n = 1.0)
{
    m_value -= n;
Decrement can still go below m_max_debt before checking is complete.
Author:
decrement() can always go below m_max_debt, it only reports when it has done so -- it leaves it up to the caller to not go further into debt.
All callers already have the full transaction available, so just pass that through. This matches the signature for InitiateTxBroadcastPrivate() and prepares for a later commit that needs the full transaction to compute its serialized size for rate limiting.
Per-peer rate limiting introduces storage and compute costs proportional to the number of peers. This has caused severe bugs in the past, and continues to be a risk in the event of periods of extremely high rates of transaction submission. Avoid these problems by always completely emptying the m_tx_inventory_to_send queue when processing it. Note that this increases the potential size of INV messages we send for normal tx relay from ~1000 (limited by INVENTORY_BROADCAST_MAX) to potentially 50000 (limited by MAX_INV_SZ).
Add a method for sorting a batch of transactions (specified as a vector of wtxids) per mempool order, designed for transaction relay.
Change the per-peer tx relay queue from std::set to std::vector. This reduces the memory usage and improves locality, at the cost of not automatically deduping entries.
Now unused; replaced by SortMiningScoreWithTopology.
This is a simple token bucket parameterized on clock type, used in the following commit.
Without the per-peer rate limiting, nodes can act as an amplifier for transaction spam -- receiving many transactions from one node, but relaying each of them to over 100 other nodes. Limit the impact of this by providing a global rate limit. This is implemented using dual token buckets: one that consumes a token for every transaction, and one that consumes a token for every serialized byte. This rate limits both per-tx resource usage (eg INV messages) and overall relay bandwidth.

Main bucket parameters:

* Count: 14tx/s rate, 420tx (30s) capacity
* Size: 12MB/600s rate (4-6 blocks per target block interval), 50MB capacity

The size bucket is expected to be large enough to almost never have an impact in normal usage, even during transaction storms, and is primarily intended to mitigate attack-like scenarios. Outbound connections get a separate pair of buckets, with rates boosted by a 2.5x multiplier.

This avoids the excessive memory and CPU usage due to the 100x multiplier from the queues being per-peer. Note that this also reduces the size of INV messages we send for general tx relay back to a more reasonable level of under 600 txs in 99.999% of cases.
Adds a debug-only configuration option to set the target transaction/second rate for relay to inbound connections. This is mostly intended to be set to artificially low values to aid in testing behaviour when a backlog occurs, but is also available in case the default 14tx/s target is somehow too low in practice.
Add `tx_send_rate` and `inv_buckets` fields to getnetworkinfo. The latter has `inbound` and `outbound` entries, each reporting the backlog count, count tokens, and size tokens. Useful for monitoring relay behavior.
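The new fields might look something like this in getnetworkinfo output. The top-level field names come from the commit message above; the nested key names and all values are invented for illustration.

```json
{
  "tx_send_rate": 14,
  "inv_buckets": {
    "inbound":  { "backlog": 0, "count_tokens": 420.0, "size_tokens": 50000000 },
    "outbound": { "backlog": 0, "count_tokens": 1050.0, "size_tokens": 125000000 }
  }
}
```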
Force-pushed cbae021 to 4b961e3 (Compare)
Per-peer m_tx_inventory_to_send queues have CPU and memory costs that scale with both queue size and peer count. Under high transaction volume, this has previously caused severe issues (May 2023 disclosure) and can still cause measurable delays (Feb 2026 Runestone surge, with the msghand thread observed hitting 100% CPU and queue memory reaching ~95MB).

This PR replaces the per-peer rate limiting with a global queue using dual token buckets (limiting transactions by both count and serialized size). Transactions that arrive within the bucket capacity still relay nearly immediately, but excess transactions queue in a global backlog and drain as the token buckets refill.
Key parameters:

* Count: 14tx/s rate, 420tx (30s) capacity
* Size: 12MB/600s rate, 50MB capacity
* Outbound connections get a separate pair of buckets, with rates boosted by a 2.5x multiplier
Per-peer queues are retained solely for privacy batching and are always fully emptied, removing the old INVENTORY_BROADCAST_MAX cap. This reduces the memory and CPU burden during transaction spikes, when the queuing logic is engaged, from O(queue * peers) to O(queue), as the queued transactions no longer need to be retained per-peer or re-sorted per-peer.
Design discussion: https://gist.github.com/ajtowns/d61bea974a07190fa6c6c8eaef3638b9