[WIP] Expand CPFP "carve-out" rule from N=1 to N=100 #18725
Conversation
Yes, let's do this as soon as possible. But Concept NACK for now. The mempool project (which I've added this to) needs to make more progress before something like this is remotely feasible. Added this to the tracker for that project for when the time comes!
ariard left a comment
I lean toward Concept NACK too. Before making the carve-out more generic to support wider use-cases, I think mempool-pinning cases should be better documented, to keep protocol designers from doing anything unsafe (even if the carve-out isn't directly an issue, who can spend what, when, and how may be leveraged; see the post on RBF-pinning on the mailing list).
```cpp
    !m_pool.CalculateMemPoolAncestors(*entry, setAncestors, 2, m_limit_ancestor_size, m_limit_descendants + 1, m_limit_descendant_size + EXTRA_DESCENDANT_TX_SIZE_LIMIT, dummy_err_string)) {
```

```cpp
// Immediately bail from carve-out logic if transaction isn't relatively small
if (nSize > EXTRA_DESCENDANT_TX_SIZE_LIMIT) {
```
A 1000-vbyte size * 100 seems enough for one party owning multiple carve-out outputs to decrease the feerate of the unconfirmed parent tx enough to get it stuck in the mempool. I think that's already a risk today, but right now a custodial service, being the broadcaster, can iteratively batch on its own output to reach DEFAULT_DESCENDANT_LIMIT first and avoid any junk-feerate squatting branch.
That said, the batching case may be far less adversarial than the LN one, and the worst case may be just a few more blocks to wait instead of a fund loss as in LN.
The batching case, AFAICT, is less complicated, and outright theft is not a result if somehow things go wrong. Instead it may simply result in certain processors not batching, or doing smaller batches to avoid pinning hundreds of user payouts, or simply paying a much higher feerate to avoid the situation.
It's a practical problem in real-world deployments today; solving it would make sure that the more general case of #15046 can hold. To be clear, I find these counter-arguments persuasive:
- We don't have a principled enough view of these changes and it may become a DoS
- Actually this doesn't work for the stated use-case because X
The fact that LN is 1000x more complicated and the original carve-out was insufficient is less persuasive imo.
That's inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).
One space of solutions would be to have a defined eviction order within the transaction; e.g., spends from output n+1 > output n, or outputs ordered by value spent with an order tiebreaker. I think the decision rule should not look at the spending transaction itself (e.g. for feerate) because that introduces a bunch of problems. The issue with a solution like this is that it's economically irrational: a miner should take the highest-fee-paying transaction. So these policies can be a stopgap convenience so that people have a better UX with well-behaving nodes, but they can't be expected to be followed longer term.
IMO the only real solution is a larger mempool reworking, which decouples accepting and storing a transaction from eviction and from mining more aggressively. That way we can dumbly accept transactions if they connect, with no computational risk, evict whenever required, and mine on a best-effort understanding of feerates. The first step in accomplishing this is finishing the MemPool Project's first batch of refactors, which improve all of our traversal algorithms. Once that is done, we'll have a better understanding of what we can tolerably raise the limits to with the current architecture (including better carve-out limits). Following that, some of these bigger-picture modularization refactors can be done as needed. There's not a strict ordering between these, but it's easy to change a "pure function" to another equivalent "pure function", and harder to re-architect an entire data structure that will have new behaviors.
> That's inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).

Fair enough, people may be building really bad ideas on top.
> That's inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).

That would be a zero-conf channel, which is already under a double-spend threat, at least as understood by the people who implemented it. But yes, we shouldn't make it easier to build insecure protocols.
Well, the issue comes up in other contexts too, where it's expressly not a problem if it's known the parent cannot be malleated, such as a batch where you are 1 of N parties to have signed the N-of-N, but the outputs are independent. It's perfectly safe to use HTLCs between any of the N people, but pinning can still be an issue.
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
🐙 This pull request conflicts with the target branch and needs rebase.
Chasing Concept ACKs and discussion. Probably broken. Needs p2p wisdom, which I lack.
The motivating use-case is that this would allow batched payouts to be done safely by custodial services without them being pinned. Specifically, this would allow up to N+1 payment outputs, one of which can be a self-send to the service, which is then spent and RBF'd as needed, similar to the motivating LN use-case given in the code. This could greatly simplify payout logic for services, and allow them to save on fees more aggressively.
Built on #18723