Conversation
sanket1729 left a comment
Ready for review now
CC'ing myself as I'd like to (hopefully) review this soon :)
Force-pushed 125b059 to 887fde3
src/expression.rs
Outdated
}

#[test]
#[ignore]
This #[ignore] can be removed. (The heavy_nest one cannot.)
Why? are they not the same thing?
The heavy_nest one stack overflows on my system while this one does not.
I assumed the other stack overflow would be fixed in a future PR.
Interesting, does it show an overflow error? Or just take a long time because there is no recursion here.
The reason for #[ignore] is that all ignored tests are run in release mode in CI. In debug mode, rustc keeps a lot of debug information, which makes the program really slow to execute.
As of the first commit of the PR, both of the tests should pass in a few seconds in release mode. There is no bug fix deferred to a future commit.
Based on a discussion in IRC, I have reverted the commits that switched the implementation back to recursive ones.
/// Maximum stack + alt stack size during dissat execution
/// This does **not** include initial witness elements. This element only captures
/// the additional elements that are pushed during execution.
pub exec_stack_elem_count_dissat: Option<usize>,
Why not include initial witness elements? I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack
We are already tracking max_elem_stack_count for initial witness elements in a separate field.
I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack.
ors/threshes can have multiple satisfactions, which can have different heights.
Wait, no, I'm still confused... ors have multiple satisfactions, but the max stack depth will just be the maximum of the children. Similarly for thresh, you just take the sum of the k maximum-depth subitems.
but the max stack depth will just be the maximum of the children
This is incorrect.
The stack depth while executing A in and_b(A,B) would be max_exec_A + initial_witness_A + initial_witness_B. But for B it would be max_exec_B + initial_witness_B, because A's witness would have been consumed by then.
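A minimal sketch of this bookkeeping (names here are hypothetical, and the boolean result A leaves on the stack is ignored for simplicity):

```rust
use std::cmp;

// Hypothetical sketch of the and_b(A,B) peak-stack reasoning above.
// exec_* = extra elements pushed during execution (not initial witness),
// witness_* = initial witness elements consumed by each child.
fn and_b_exec_height(exec_a: usize, witness_a: usize, exec_b: usize, witness_b: usize) -> usize {
    // While A executes, B's witness still sits beneath it on the stack.
    let during_a = exec_a + witness_a + witness_b;
    // By the time B executes, A's witness has been consumed.
    let during_b = exec_b + witness_b;
    cmp::max(during_a, during_b)
}
```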
max_sat_size: self.max_sat_size.map(|(w, s)| (w + 2, s + 1)),
max_dissat_size: self.max_dissat_size.map(|(w, s)| (w + 1, s + 1)),
// TODO: fix dissat stack elem counting above in a later commit
// Technically max(1, self.exec_stack_elem_count_sat), same rationale as cast_dupif
I'm curious why you defer this to a later commit. You have cmp::max :)
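The suggested fix can be sketched directly, assuming the field is an Option&lt;usize&gt; as in the quoted code:

```rust
use std::cmp;

// Applying cmp::max(1, ...) inside Option::map, per the review suggestion.
// A dissatisfaction pushes at least one element, hence the floor of 1.
fn dissat_exec_count(exec_stack_elem_count_sat: Option<usize>) -> Option<usize> {
    exec_stack_elem_count_sat.map(|c| cmp::max(1, c))
}
```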
Reviewed 9ae4d7b. Mostly good, just a few nits.
Force-pushed 9ae4d7b to 1f0cd5e
ms.ext.exec_stack_elem_count_sat,
ms.ext.stack_elem_count_sat,
) {
if s + h > MAX_STACK_SIZE {
The check with initial elements + exec elements is done here.
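A hedged sketch of that check: a satisfaction is rejected if its initial witness elements (s) plus peak execution elements (h) exceed the consensus stack limit of 1000 elements.

```rust
// Consensus limit on combined stack + alt-stack size.
const MAX_STACK_SIZE: usize = 1000;

// None means no satisfaction exists for this miniscript, so there is
// nothing to check (mirrors the Some/Some pairing discussed below).
fn within_stack_limit(stack_elems: Option<usize>, exec_elems: Option<usize>) -> bool {
    match (stack_elems, exec_elems) {
        (Some(s), Some(h)) => s + h <= MAX_STACK_SIZE,
        _ => true,
    }
}
```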
Should we fail if we have one of these properties and not the other?
None here means that there is no satisfaction for this miniscript. In all cases, if one of them is Some, the other must also be Some.
Maybe it should be Some((x, y)) instead of Some(x), Some(y) then? Or assert that !a.xor(b).is_some()?
I will add the assert for now.
Force-pushed 1f0cd5e to 7103b50
pub trait ScriptContext:
    fmt::Debug + Clone + Ord + PartialOrd + Eq + PartialEq + hash::Hash + private::Sealed
where
    Self::Key: MiniscriptKey<Hash = bitcoin::hashes::hash160::Hash>,
is there a reason for splitting the trait bound in a where / type clause (below)?
/// Note that this includes the serialization prefix. Returns
/// 34/66 for Bare/Legacy based on key compressedness
/// 34 for Segwitv0, 33 for Tap
fn pk_len<Pk: MiniscriptKey>(pk: &Pk) -> usize;
I think this should not be generic, it should just be pk: &Key?
This should be generic because we want the ability to compute the length of scripts when Pk is abstract, without requiring translating those to consensus keys.
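The point can be illustrated with a toy sketch (trait and type names here are hypothetical, not the real rust-miniscript API): the length depends only on key metadata such as compressedness, so no concrete key bytes are needed.

```rust
// Hypothetical stand-in for the MiniscriptKey-style metadata a key exposes.
trait KeyMeta {
    fn is_compressed(&self) -> bool;
}

// An abstract placeholder key with no real key bytes at all.
struct PlaceholderKey {
    compressed: bool,
}

impl KeyMeta for PlaceholderKey {
    fn is_compressed(&self) -> bool {
        self.compressed
    }
}

// 34/66 for Bare/Legacy based on compressedness, as the quoted doc comment
// says: a 1-byte push opcode plus a 33- or 65-byte serialized key.
fn pk_len_bare<Pk: KeyMeta>(pk: &Pk) -> usize {
    if pk.is_compressed() {
        34
    } else {
        66
    }
}
```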
) -> Result<(), ScriptContextError> {
    // No fragment is malleable in tapscript context.
    // Certain fragments like Multi are invalid, but are not malleable
    Ok(())
does that include fragments that have a repeated key?
No, that check is carried out separately in another pass.
}

/// Trait for parsing keys from byte slices
pub trait ParseableKey: Sized + ToPublicKey + private::Sealed {
Should ParseableKey require MiniscriptKey?
I will make it more explicit, but ToPublicKey has a MiniscriptKey bound.
},
// Note this does not collide with hash32 because they always followed by equal
// and would be parsed in different branch. If we get a naked Bytes32, it must be
// a x-only key
FWIW this part kinda confuses me, and I don't feel certain there's no ambiguity ever.
Can this logic be refactored to make the absence of a collision clearer, or can the documentation of parse be improved (e.g., perhaps in a comment here that gets PR'd separately)?
non_term.push(NonTerm::Verify);
term.reduce0(Terminal::Hash160(
-    hash160::Hash::from_inner(hash)
+    hash160::Hash::from_slice(hash).expect("valid size")
Can we use a non panicking form on these?
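A stdlib-only illustration of the non-panicking pattern being asked for; the real code would map into the parser's error type rather than a String, which is used here purely for illustration:

```rust
use std::convert::TryFrom;

// Propagate a length error instead of `.expect("valid size")`.
// TryFrom<&[u8]> for [u8; N] fails cleanly on a wrong-length slice.
fn hash160_from_slice(hash: &[u8]) -> Result<[u8; 20], String> {
    <[u8; 20]>::try_from(hash).map_err(|_| format!("invalid hash160 length: {}", hash.len()))
}
```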
PublicKey::from_slice(bytes).map_err(Error::BadPubkey)?,
));
}
20 => ret.push(Token::Hash20(&bytes)),
see https://github.com/rust-bitcoin/rust-miniscript/pull/255/files#r729970708, you can make these be array refs instead of unsized slices.
you can do something like:
<&[u8; 20]>::try_from(bytes).map(Token::Hash20).map(|t| ret.push(t));
<&[u8; 32]>::try_from(bytes).map(Token::Bytes32).map(|t| ret.push(t));
<&[u8; 33]>::try_from(bytes).map(Token::Bytes33).map(|t| ret.push(t));
<&[u8; 65]>::try_from(bytes).map(Token::Bytes65).map(|t| ret.push(t));
Will do it once we have 1.34 MSRV.
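For reference, a runnable sketch of the array-ref suggestion with a toy Token type (the real enum lives in the lexer): try_from on a slice yields a fixed-size array reference without copying.

```rust
use std::convert::TryFrom;

// Toy stand-in for the lexer's Token enum, holding array refs
// instead of unsized slices.
#[derive(Debug, PartialEq)]
enum Token<'a> {
    Hash20(&'a [u8; 20]),
    Bytes32(&'a [u8; 32]),
}

// Push the token whose fixed length matches the slice, if any.
fn push_token<'a>(ret: &mut Vec<Token<'a>>, bytes: &'a [u8]) {
    if let Ok(arr) = <&[u8; 20]>::try_from(bytes) {
        ret.push(Token::Hash20(arr));
    } else if let Ok(arr) = <&[u8; 32]>::try_from(bytes) {
        ret.push(Token::Bytes32(arr));
    }
}
```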
src/policy/semantic.rs
Outdated
//! Abstract Policies

use std::collections::HashSet;
Prefer using BTreeSet, since it gives you reproducible results, although this is less critical for abstract policies than for concrete ones.
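The reproducibility point in a nutshell (illustrative only): a BTreeSet iterates in sorted order, so anything serialized from it is deterministic across runs, whereas HashSet iteration order can vary.

```rust
use std::collections::BTreeSet;

// Deduplicate and return items in a deterministic (sorted) order.
fn dedup_sorted(items: &[&str]) -> Vec<String> {
    let set: BTreeSet<&str> = items.iter().copied().collect();
    set.into_iter().map(String::from).collect()
}
```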
Sorry, this was an unnecessary change that somehow got into this PR while I was testing bitcoin core merge script.
Overall looking pretty fantastic, awesome work @sanket1729. A few minor points of cleanup, but nothing that IMO feels blocking for merge (although it would be worth fixing before release).
src/miniscript/types/extra_props.rs
Outdated
debug_assert!(!xor(
    self.stack_elem_count_dissat,
    self.exec_stack_elem_count_dissat
));
nit: this could probably be more clearly written as
debug_assert_eq!(self.stack_elem_count_sat.is_some(), self.exec_stack_elem_count_sat.is_some());
etc., without the xor helper.
Fixup: remove xor check
Added a fixup commit from your ACK that should be easy to review. It removes the xor check and adds some comments about x-only decoding.
42e4c50 Cleanup context code with SigType enum (sanket1729)
e70e6a3 Fix fuzzer crash while allocating keys for multi_a (sanket1729)
f6b2536 Add satisfaction tests (sanket1729)
1a7e779 Change satisfier to support xonly sigs (sanket1729)
2c32c03 Use bitcoin::SchnorrSig type from rust-bitcoin (sanket1729)
e26d4e6 Use bitcoin::EcdsaSig from rust-bitcoin (sanket1729)
89e7cb3 add multi_a (sanket1729)

Pull request description:
Adds support for the MultiA script fragment. Builds on top of #255. Do not merge this until we have a rust-bitcoin release.

ACKs for top commit:
apoelstra: ACK 42e4c50

Tree-SHA512: 10701266cf6fe436fe07359e67d16bc925ebbcd767d4e5a8d325db61b979bd76b1e0291dd9eae5b7d58f6a385f8f2e16c2657957a597ce18736382b776e20502
96cb2ecdb8d1511f05e8b51531e8719626f6ef31 Add comment to decoding x-only keys (sanket1729)
498bad2338fa43d2e4384023c28b7e9ec9772953 Address review feedback (sanket1729)
954df3572e1d36a93e884483f09e98b0d0a2186b Check stack execution height in tapscript execution (sanket1729)
95ab99da07d59bf8e2b4577718dcd5baa113e4be Add height check to miniscript types (sanket1729)
3eb63d598f8c0b9ad21d970e6cdc1f5fc487fbaf Add parse/parse_insane for schnorr keys (sanket1729)
888cd62ca6cb39fa2975c939bab31261a6af7076 Add support for parsing Tapscript Miniscripts (sanket1729)
9a5384daee91c87fa60fcd82718a8f641c2426dc Add TapCtx (sanket1729)
4da439f561db9e97e357abb0beb27132a6f8a0bb Fix warnings (sanket1729)
Pull request description:
This is still a draft WIP for taproot PR. There are several API changes and multiple design decisions, and I am open to other possible design opinions. In particular, feedback about changes to `ToPublicKey` and the `ScriptContext` trait is appreciated.
This does **not** include support for P2TR descriptors, multi_a fragment in miniscript.
Still todo:
- [ ] Track height during execution to make sure we don't exceed 1000 elements. This was not a concern previously because it was impossible to hit the limit without exceeding 201 opcodes. But with taproot, this is now possible. See #198 .
- [ ] Implement support for multi_a fragment in Taproot. Depends on support for new opcodes from rust-bitcoin. We can use NOPs meanwhile as this is not a hard blocker.
- [ ] Implement taproot descriptor. This would require some design considerations about the tree representation.
- [ ] Integrate taproot with interpreter and compiler. The interpreter is probably straightforward, but carrying around `bitcoin::PublicKey` and `schnorr::PublicKey` would be tricky.
- [ ] Integrate with descriptorPublicKey so that descriptor wallets like bdk can actually use it.
ACKs for top commit:
apoelstra:
ACK 96cb2ecdb8d1511f05e8b51531e8719626f6ef31
Tree-SHA512: 261173c3a6f8c980d204fce1f868dcd17c4219fd12a8e6e197b503efb47ae9b9371d7a9208c43008f1c21db55ffc3f68e6d7fb8badbf2bf397b2970292f8ffe4