consensus_encoding: match core's range check decoding compact sizes #5897
nyonson wants to merge 4 commits into rust-bitcoin:master
Conversation
Force-pushed 585a9e0 to 1036ea2.
Force-pushed 1036ea2 to 207a771.
```rust
///
/// This is an anti-DoS limit which won't possibly reject any block,
/// or part of a block, on the network.
#[cfg(feature = "alloc")]
```

I'm curious why this cfg feature for alloc is needed here. Where exactly is the allocation happening that needs this to be enabled?
The const is only used when alloc is enabled.
```
error: constant `MAX_VEC_SIZE` is never used
  --> consensus_encoding/src/decode/decoders.rs:27:7
   |
27 | const MAX_VEC_SIZE: usize = 4_000_000;
   |       ^^^^^^^^^^^^
   |
   = note: `-D dead-code` implied by `-D warnings`
   = help: to override `-D warnings` add `#[expect(dead_code)]` or `#[allow(dead_code)]`
```
Oh right, I see now. Strange that my compiler doesn't seem to give me that error. Is this just from a cargo build with the feature gate removed from the const?
Yep, remove the gate and then build with no features enabled. You can use `cargo rbmt test -p consensus_encoding` if you want the different feature flag sets to be run for you automatically.
Currently:

```rust
impl ByteVecDecoder {
    /// Constructs a new byte decoder.
    pub const fn new() -> Self {
        Self {
            prefix_decoder: Some(CompactSizeDecoder::new()),
            buffer: Vec::new(),
            bytes_expected: 0,
            bytes_written: 0,
        }
    }
}
```
Should it? I was debating that one internally. It is allocating just a hunk of bytes, not a vector of things, so I thought it fit the looser
Isn't the use case only ever to decode something that goes in a block, where the prefix is the length of that? So it is exactly what we introduced the 4MB limit for, right?
I don't think that's exactly right. Just looking at the spots we already use
I don't have a great grasp on this whole story yet. But looking at the Core code, they appear to take a more two-tiered approach, and this is what I am tacking towards in this change. Core has the 32MB max for compact size in general, but then enforces lower maxes in type-specific scenarios. But for compact block filters, for example, it looks like they don't enforce any lower bound, so the default 32MB is used (although I think this is separately bound by a max p2p message size, handled at a different layer). So our current approach is actually more conservative. Do we want to try to match Core one for one on this? I am not sure of the best way to go yet, but I do think this is a nudge in the right direction.
EDIT: Nick and I talked out of band; expect a force push that breaks this up to make review easier.
I don't know the answer to that. I thought we were explicitly trying to be more conservative. Correct me if I'm wrong, but I was under the impression that the
If you have dug into it this much I'm happy to go with whatever you come up with. We will have to burn a few of @apoelstra's clock cycles too, though.
Force-pushed 207a771 to 138a477.
138a477: took a more conservative approach and broke down any new defaults. Added a small API addition at the end so we can kick the can on changing any more defaults for now.
Force-pushed 138a477 to 68b651a.
68b651a: fix formatting and run the formatter for the workspace.
Thanks for breaking this up, I've got way more confidence in my review now.
```rust
/// limit.
pub const fn new() -> Self { Self { buf: ArrayVec::new(), limit: MAX_VEC_SIZE } }

/// The decoded value must not exceed the default 32MB limit and must fit in
/// a `usize`, otherwise [`end`](Self::end) will return an error.
```
In 989dec3:
This comment seems wrong. If `MAX_COMPACT_SIZE` doesn't fit in a `usize` then the code just won't compile.
At some point we can conditionally compile so that this constant is reduced to `usize::MAX` on 16-bit machines, but I'm fine ignoring this until we figure out how we can actually test on 16-bit machines. But this doccomment we should change.
Yeah, you are right; it made sense when the default was smaller.
```diff
 use encoding::{
-    ArrayDecoder, ArrayEncoder, ByteVecDecoder, BytesEncoder, CompactSizeEncoder, CompactSizeU64Decoder, Decoder2, Decoder4, Encoder2, Encoder4
+    ArrayDecoder, ArrayEncoder, ByteVecDecoder, BytesEncoder, CompactSizeEncoder,
+    CompactSizeU64Decoder, Decoder2, Decoder4, Encoder2, Encoder4,
```
In 989dec3:
All the changes in address.rs in this commit appear to be formatting changes.
Dang my bad, dropped.
Force-pushed 68b651a to 098b587.
098b587: rebased and folded in doc and formatting fixes.
Can you drop 098b587? It messes up the formatting.
Formatting issues fixed in #5927
ACK f05289b
@nyonson if you don't rebase and just drop the last commit then my ack should be good. |
Force-pushed 098b587 to f05289b.
Abandoned the last format commit.
After all that I accidentally acked this trying to ack #5879 ...
f05289b needs rebase
Bump the max decoding limit on the standard compact size decoder from 4MB to 32MB. This matches the value used in Core as the general anti-DoS setting. All current call sites across the crates are updated to use the old default explicitly; they can be switched to the new default on a case-by-case basis where it makes sense. This new default breaks on 16-bit machines since it doesn't fit in a `usize`. The alternative would be to change the type to a `u64`, which sucks for everyone else.
These limit overrides are unnecessary because both types, `PrefilledTransaction.idx` and `Offset`, are `u16`s under the hood. The decoding will fail on too-large values when the type conversion is attempted in `::end()`.
Allow setting a custom limit in case the default 4MiB is too conservative.
Force-pushed f05289b to 02145aa.
@apoelstra @tcharding want me to fix up the API files here, or would that be kinda confusing?
Andrew's call really, but from my perspective a patch at the front of this (same as #5930) would be fine. But Andrew might be happy to merge as is; if my memory is correct, his CI doesn't consider the API text files.

02145aa needs rebase
Updates the default max value limit of the compact size decoder to `MAX_COMPACT_SIZE: usize = 0x0200_0000`, which matches the Core implementation.

I went with a `usize` here to minimize the API change, and also, I think `usize` generally makes the most sense with compact sizes. However, this value is larger than would fit on a 16-bit machine... not sure if that is a deal breaker. We already have one const defined which is too big if it matters (`decoders::MAX_VECTOR_ALLOCATE` in the same module).

The existing `MAX_VEC_SIZE` limit is moved over to decoders and used with the vec decoder. Callsites are generally kept the same, with some new constructors exposed for possible overrides in the future (p2p/psbt).

Closes #5752