
Refactor trait Decodable (sketch)#1019

Closed
dpc wants to merge 1 commit intorust-bitcoin:masterfrom
dpc:issue-997

Conversation

@dpc
Contributor

@dpc dpc commented May 28, 2022

Adjust code afterwards. This is to address #997.

Edit: Eh, this is backwards. 🤦 I started with somewhat different details, then reverted that idea and didn't realize the implications until I woke up today.

consensus_decode_from_finite_reader should just forward to consensus_decode, because in the general case decoding without a finite-length requirement is more general than decoding with it.

Most of the types won't care, because they are finite and non-recursive. Only Vecs and some other containers will override consensus_decode_from_finite_reader + consensus_decode.

Yeah. I'll redo later today if I find time.
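The shape described above can be sketched roughly like this. MAX_MSG_SIZE, the io::Error error type, and the toy 1-byte length prefix are illustrative stand-ins, not the crate's actual constants, error type, or wire encoding:

```rust
use std::io::{self, Read};

// Illustrative limit; the real crate uses its own constants.
const MAX_MSG_SIZE: u64 = 4_000_000;

trait Decodable: Sized {
    // The general entry point: must cope with an unbounded reader.
    fn consensus_decode<D: Read>(d: D) -> Result<Self, io::Error>;

    // Default just forwards: anything that can decode from an unbounded
    // reader can also decode from a finite one.
    fn consensus_decode_from_finite_reader<D: Read>(d: D) -> Result<Self, io::Error> {
        Self::consensus_decode(d)
    }
}

impl Decodable for u8 {
    fn consensus_decode<D: Read>(mut d: D) -> Result<Self, io::Error> {
        let mut buf = [0u8; 1];
        d.read_exact(&mut buf)?;
        Ok(buf[0])
    }
}

// Containers override *both* methods: the finite-reader variant trusts
// the bound and allocates, while the general one wraps the reader in
// `Take` before delegating. The 1-byte length prefix is a toy stand-in
// for the real CompactSize encoding.
impl Decodable for Vec<u8> {
    fn consensus_decode<D: Read>(d: D) -> Result<Self, io::Error> {
        Self::consensus_decode_from_finite_reader(d.take(MAX_MSG_SIZE))
    }

    fn consensus_decode_from_finite_reader<D: Read>(mut d: D) -> Result<Self, io::Error> {
        let len = u8::consensus_decode(&mut d)? as usize;
        let mut v = vec![0u8; len];
        d.read_exact(&mut v)?;
        Ok(v)
    }
}
```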

@dpc
Contributor Author

dpc commented May 28, 2022

I wanted to check how much effort would implementing #997 (comment) be.

I haven't even reviewed the changes myself yet, just changed the trait and then went one by one and fixed all the errors.

Cons:

  • well, there's quite a lot of churn involved
  • how this trait works becomes more subtle
    • decodables that need OOM protection need to remember to override fn consensus_decode to wrap the reader in a Take (though this is not much more of a problem than the previous checking)

Pros:

  • I guess it kind of works

Adjust code afterwards. This is to address #997.

#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
    Self::consensus_decode_from_finite_reader(d.take(MAX_MSG_SIZE as u64))
}
Contributor Author


Oh, now I see that the previous code was using MAX_VEC_SIZE. I was wondering if messages on the wire are bound by the same limit as blocks.

// Do not allocate upfront more items than if the sequence of the type
// occupied roughly a quarter of a block. This should never be the case
// for normal data, but even if that's not true - `push` will just
// reallocate. This should prevent even malicious decoding requests
Contributor Author


This justification is kind of a stretch. But it is true, that we can default to something lower now.
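As a rough illustration of defaulting to a lower upfront cap: the constant, element type, and helper below are hypothetical, not the crate's code; the point is only that the initial allocation is bounded while `push` still handles genuinely long sequences.

```rust
use std::io::{self, Read};
use std::mem;

// Hypothetical cap: never reserve more than this many bytes upfront,
// whatever the decoded length prefix claims.
const MAX_UPFRONT_BYTES: usize = 1_000_000;

// Decode `len` little-endian u32 values, capping the initial
// allocation; `push` reallocates if the data really is that long.
fn decode_u32s<D: Read>(mut d: D, len: usize) -> io::Result<Vec<u32>> {
    let cap = len.min(MAX_UPFRONT_BYTES / mem::size_of::<u32>());
    let mut v = Vec::with_capacity(cap);
    for _ in 0..len {
        let mut buf = [0u8; 4];
        d.read_exact(&mut buf)?;
        v.push(u32::from_le_bytes(buf));
    }
    Ok(v)
}
```

A malicious length prefix can at most force the bounded reservation; the loop then fails fast on read_exact once the reader runs dry.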

/// This function relies on `reader` being bound in amount of data
/// it returns for OOM protection. See [`Decodable::consensus_decode_from_finite_reader`].
fn consensus_decode_bytes_from_finite_reader<D: io::Read>(mut d: D, mut len: usize) -> Result<Vec<u8>, Error> {
// there should never be any byte vectors longer than this,
Contributor Author


Is this true?
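For context, one way such a function can avoid trusting `len` for a single large allocation is to read in bounded chunks. A minimal sketch under assumed names (CHUNK_SIZE and the free function are illustrative, not the crate's actual code):

```rust
use std::io::{self, Read};

// Illustrative chunk size: the vector grows by at most this much per
// iteration, so a huge claimed `len` cannot force one huge allocation.
const CHUNK_SIZE: usize = 1024;

fn read_bytes_from_finite_reader<D: Read>(mut d: D, mut len: usize) -> io::Result<Vec<u8>> {
    let mut ret = Vec::new();
    while len > 0 {
        let chunk = len.min(CHUNK_SIZE);
        let start = ret.len();
        ret.resize(start + chunk, 0u8);
        d.read_exact(&mut ret[start..])?;
        len -= chunk;
    }
    Ok(ret)
}
```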

}

#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
Contributor Author

@dpc dpc May 28, 2022


Unnecessary. Since we're using consensus_decode on all inner types, they will enforce the limits there.
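A sketch of why: when a struct decodes each field through the field type's own consensus_decode, every inner decode is already bounded, so wrapping the outer reader in Take adds nothing. Pair and this minimal trait are hypothetical illustrations, not the crate's types:

```rust
use std::io::{self, Read};

trait Decodable: Sized {
    fn consensus_decode<D: Read>(d: D) -> io::Result<Self>;
}

impl Decodable for u32 {
    fn consensus_decode<D: Read>(mut d: D) -> io::Result<Self> {
        let mut buf = [0u8; 4];
        d.read_exact(&mut buf)?;
        Ok(u32::from_le_bytes(buf))
    }
}

// Hypothetical fixed-size struct: no Take wrapper needed, because each
// field's decoder reads a bounded number of bytes on its own.
struct Pair {
    a: u32,
    b: u32,
}

impl Decodable for Pair {
    fn consensus_decode<D: Read>(mut d: D) -> io::Result<Self> {
        Ok(Pair {
            a: u32::consensus_decode(&mut d)?,
            b: u32::consensus_decode(&mut d)?,
        })
    }
}
```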

}

#[inline]
fn consensus_decode<D: io::Read>(d: D) -> Result<Self, encode::Error> {
Contributor Author


Same here? Get rid of this, and use consensus_decode in Self::consensus_decode_from_finite_reader for the limit?

@dpc
Contributor Author

dpc commented May 29, 2022

Replaced by #1023

@dpc dpc closed this May 29, 2022