Introduce assumevalid setting to skip validation presumed valid scripts. #9484
Conversation
par=4 chainstate reindex with dbcache 2000 goes from 6.15 hours to 2.96 hours. @petertodd I know you had a reservation about keeping the default in chainparams, but we also have a number of other similar heuristics there (such as the BIP34 height, total work in the best chain, and various flags). I'd suggest that in the future we move those more policy-like things into their own section of chainparams. I'd rather do that as a separate refactor later than spread this setting out away from the other ones that need to get changed as part of the review process.
TheBlueMatt
left a comment
Concept ACK. I like this approach.
doc/release-process.md
I assume when you say "set the value two blocks back" you mean "set the value two blocks back from the tip at around the time RCs begin, so that it is a month back when the release actually happens"?
I don't intend for it to be a month back at the release. There is no reason to set it back at all, except that if the block falls out of the chain the speedup will be lost. Another possibility would be to advise setting it at the tip, with a note that it must be fixed if the block falls out of the chain.
I also don't see a reason that this couldn't be updated at the last RC or even between an RC and a release. In general we should set it as far forward as possible... there is no benefit in setting it back, and the more forward it is, the longer it takes for the cached value to rot.
OK. Fair enough, though I disagree that it should be allowed to change pre-release post-rc...for a parameter like this we really want lots of eyes and a few days of letting people reindex prior to release.
> There is no reason to set it back at all except that if the block falls out of the chain the speedup will be lost.

Note we could make it smart enough to see the common parent block, but I'm not sure it's worth the additional effort.

> I also don't see a reason that this couldn't be updated at the last RC or even between an RC and a release. In general we should set it as far forward as possible...

If we're setting it last minute, I do think more than 2 blocks back would be best. Maybe 10.
src/validation.cpp
Is there any meaningful difference here between a month and a week? (let the bikeshedding begin!).
I don't think so. I picked a week here because (1) that is the minimum time in our process between an RC and a release and (2) I think two days is likely sufficient for the purpose in the comment-- enough time for a widespread public response.
But if people preferred a month, that would be okay too. I was half expecting this test to get bikeshedded out rather than cranked up.
One downside of a longer value is that users on really slow devices, who might put a very current value in from another node of theirs, would get a diminished sync-time improvement.
Hum, at the risk of overdesign, I'd be OK with only a week or so if the user explicitly specified a trust anchor, but for the default in a release, I'm afraid a week isn't all that long in the context of relying on widespread public outcry to prevent failures.
You're concerned about it with respect to the default? I don't think that makes sense: It's correct or it isn't and stepping a few blocks back there doesn't make it more correct. If not for the extra complexity and the benefit that block validation gets better tested in the RC cycle I would have exempted the default from this test.
If the review process won't catch an invalid block in the chain-- a simple comparison with a constant-- how is it going to catch the invalidity being simply made valid by indirect and subtle changes to the consensus behavior?
Petertodd's argument for this approach was that the review effort for these values is trivial compared to any of the rest of the software, making it safe and easy to update them with a minimum of additional scrutiny.
> Note we could make it smart enough to see the common parent block, but I'm not sure it's worth the additional effort.

We could ship a tool that performed the entire set of updates for a release; it would be pretty easy to have such a tool use chain-tips to go back further if there is a competing fork. Similarly, the tool could also be used to check, like we have for signed commits. (Though if we do that I think it should be a separate PR.)
I personally don't like this part. I think it's simpler to just "artificially set the default assumed verified block further back", don't see that as a big deal. But no strong opinion, if that's a concern, the solution seems good.
Micro-nit: regardless of the value chosen, `60 * 60 * 24 * 7` could use a const or #define.
That's not the worst-case scenario. They could set it to a block with invalid scripts in the history, which would break from consensus. I do think this is an acceptable risk, but maybe it should get at least some warning.

@luke-jr yes, but only if it is buried by a week (in current code), which is the reason for that additional test. The logprint that it's assuming valid was my warning... but if you'd like to suggest something more...
src/init.cpp
This seems to result in an out-of-bounds memory access when -assumevalid= (with no value) is supplied. uint256S(..) invokes base_blob::SetHex(const char *), which assumes a supplied length of at least 1. An out-of-bounds access then occurs at src/uint256.cpp:39.
GetArg returns a std::string; base_blob::SetHex(const std::string& str) invokes c_str() on the input and calls the const char* version (which is null-terminated). On a zero-length input, the input pointer will point to a null byte. This will fail the psz[0] == '0' check (note the quotes), so no out-of-bounds access will occur on line 39 (nor below). What am I misreading?
Oh of course, the short-circuit eval there slipped my mind. Never mind, all good!
doc/release-notes.md
knowledge
fixed.
src/init.cpp
Do we use _text_ anywhere else in the help output? It seems a bit strange.
removed.
src/validation.cpp
This seems weird. You're checking
- Whether the to-be-connected block is an ancestor of the assumed-valid block.
- Whether a specific headers chain is at least a week ahead of the to-be-connected block.
There is no guarantee however that pindexBestHeader descends from the to-be-connected block, so what is the relevance of it here?
Okay, fixed, I also require the block to be an ancestor of the BestHeader.
src/validation.h
whose
fixed.
doc/release-notes.md
significant
fixed.
src/chainparams.cpp
Maybe list the height/time of that block in a comment?
added height. I'm trying to not make the update procedure too much work.
utACK 739983db5dd799424738bc752eee7721fc94f1fb

utACK 739983db5dd799424738bc752eee7721fc94f1fb. I like the approach as well.

Looks good to me. I'm going to try to write a test case. One comment: please remove or update the comment in
src/consensus/params.h
This can be in CChainParams instead of Consensus::Params, which is the consensus "section of chainparams".
doc/release-notes.md
unlike with 'checkpoints'?
doc/release-notes.md
Perhaps add "To avoid releasing software with a 'too recent' default 'assumevalid', checks will never be skipped for blocks that are only a week or less away from the most-work known tip.". Or something of the sort.
Nah, I'm now convinced about using the mechanism, but not by "To avoid releasing software with a 'too recent' default 'assumevalid'"; rather "To avoid being fooled by a too-recent invalid 'assumevalid'".
I don't think the fine details of the implementation are of general interest for the release notes. If people think this is less secure than it is -- that's not the end of the world, and a separate FAQ would be fine if there is confusion/concern.
src/validation.cpp
`pindexBestHeader->GetAncestor(pindex->nHeight) == pindex` could be moved to the first if. Maybe with no gain, but perhaps this is clearer.
That test should pretty much never fail, so it runs later to avoid wasting time trying it for blocks past the assumed-valid point.
utACK 739983d modulo small nits that I guess can be fixed later.

utACK 739983db5dd799424738bc752eee7721fc94f1fb if we change the time for comparison to anything above 2 weeks, though open to further discussion on IRC.

Another thing that just occurred to me - do we want -checkpoints=0 to imply -assumevalid=0? They seem rather equivalent in usage.
…On January 12, 2017 3:09:16 PM PST, "Jorge Timón" ***@***.***> wrote:
jtimon commented on this pull request.

> + // We've been configured with the hash of a block which has been externally verified to have a valid history.
> + // A suitable default value is included with the software and updated from time to time. Because validity
> + // relative to a piece of software is an objective fact these defaults can be easily reviewed.
> + // This setting doesn't force the selection of any particular chain but makes validating some faster by
> + // effectively caching the result of part of the verification.
> + BlockMap::const_iterator it = mapBlockIndex.find(hashAssumeValid);
> + if (it != mapBlockIndex.end()) {
> +     if (it->second->GetAncestor(pindex->nHeight) == pindex) {
> +         // This block is a member of the assumed verified chain: potentially disable script checks
> +         // The equivalent time check discourages hashpower from extorting the network via DOS attack
> +         // into accepting an invalid block through telling users they must manually set assumevalid.
> +         // Requiring a software change or burying the invalid block, regardless of the setting, makes
> +         // it hard to hide the implication of the demand. This also avoids having release candidates
> +         // that are hardly doing any signature verification at all in testing without having to
> +         // artificially set the default assumed verified block further back.
> +         fScriptChecks = (GetBlockProofEquivalentTime(*pindexBestHeader, *pindex, *pindexBestHeader, chainparams.GetConsensus()) < 60 * 60 * 24 * 7);

Micro-nit: regardless of the value chosen, `60 * 60 * 24 * 7` could use a const or #define.
Actually I vote no - we don't need more parameter interactions, and if folks are using -checkpoints=0 to avoid trusting the software they're running without reading the changelog (or the release notes), then they're really not getting what they want. But others might have a different view?

Tested this with -reindex-chainstate -dbcache=4096 -par=8

Updated, added a test against nMinimumChainWork to address some concerns, and moved the depth check to two weeks to satisfy Matt. I think it's overkill (and it would be better to have some different sanity checks), but it could be reduced in the future; and it's still better than what we have now.
src/validation.cpp
nit: s/the that/that the
fixed
This disentangles the script validation skipping from checkpoints. A new option is introduced, "assumevalid", which specifies a block whose ancestors we assume all have valid scriptsigs, so we do not check them when they are also buried under the best header by two weeks' worth of work.

Unlike checkpoints this has no influence on consensus unless you set it to a block with an invalid history. Because of this it can easily be updated without risk of influencing the network consensus. This results in a massive IBD speedup.

This approach was independently recommended by Peter Todd and Luke-Jr, since POW-based signature skipping (see PR#9180) does not have the verifiable properties of a specific hash and may create bad incentives. The downside is that, like checkpoints, the defaults bitrot and older releases will sync slower. On the plus side, users can provide their own value here, and if they set it to something crazy all that will happen is that more time will be spent validating signatures.

Checkblocks and checklevel are also moved to the hidden debug options: especially now that checkblocks has a low default there is little need to change these settings, and users frequently misunderstand them as influencing security or IBD speed. By hiding them we offset the space added by this new option.
utACK e440ac7 Travis may need a kick

utACK e440ac7

utACK e440ac7, will also try to find time to test

tested ACK. I've written a new qa testcase that exercises the assumevalid parameter here: jnewbery@fffb047 Tests:
consensus.nMinimumChainWork = uint256S("0x0000000000000000000000000000000000000000002cb971dd56d1c583c20f90");

// By default assume that the signatures in ancestors of this block are valid.
consensus.defaultAssumeValid = uint256S("0x0000000000000000030abc968e1bd635736e880b946085c93152969b9a81a6e2"); // 447235
ACK 0x0000000000000000030abc968e1bd635736e880b946085c93152969b9a81a6e2
ACK 0000000000000000030abc968e1bd635736e880b946085c93152969b9a81a6e2
ACK 0000000000000000030abc968e1bd635736e880b946085c93152969b9a81a6e2
post-merge ACK 0000000000000000030abc968e1bd635736e880b946085c93152969b9a81a6e2
ACK the block, but NACK making this on by default. I love the feature, but I think there needs to be express consent by the user to avoid setting a potentially dangerous precedent. Also, it would be nice to allow the user to stipulate their own block by which to do this - perhaps they trust a particular node on the network and would base their chosen block on that.
@rebroad This is possible, please read the docs. Also, if you are unsure about the block hash, you can always set -noassumevalid
Edit: ACK.

Please include @jnewbery's tests.
Adds a qa testcase testing the new "-assumevalid" option. The testcase builds a chain that includes an invalid signature for one of the transactions and sends that chain to three nodes:
- node0 has no -assumevalid parameter and rejects the invalid chain.
- node1 has -assumevalid set and accepts the invalid chain.
- node2 has -assumevalid set, but the invalid block is not buried deeply enough to be assumed valid, and so rejects the invalid chain.
Neat tests, I was surprised at how small it is for what it does.

utACK 7b5e3fe

did some additional light testing... still ACK

Tested a full reindex together with #9512 with asan/lsan/ubsan, all good.

utACK 7b5e3fe
  Removal of Priority Estimation
- ------------------------------
+ -------------------------------
?
// Only if ALL inputs pass do we perform expensive ECDSA signature checks.
// Helps prevent CPU exhaustion attacks.

// Skip ECDSA signature verification when connecting blocks before the
no longer ECDSA?
Script verification includes ECDSA verification, but does much more.
@rebroad The reasoning is that people are already relying on peer review to make sure the consensus logic isn't being changed (which could also lead to invalid blocks being accepted), and that is much harder than checking whether a single block hash is part of a valid chain. This is different from checkpoints, which do have an effect beyond skipping validation, and need much more careful choosing.
When syncing the chain, if an 'assumed valid' block has been configured, don't validate blocks that come in until a complete chain from the tip of the main chain to the assumed valid block has been accumulated in a temporary block column family. At that point, if a valid hash chain of blocks can be constructed from the main chain's tip to the assumed valid block, absorb all the blocks without verifying them. This is based on the design of assumed valid blocks from bitcoin, see bitcoin/bitcoin#9484 for more details.
This disentangles the script validation skipping from checkpoints.
A new option is introduced, "assumevalid", which specifies a block whose
ancestors we assume all have valid scriptsigs, so we do not check
them when they are also buried under the best header by a week's
worth of work.
Unlike checkpoints this has no influence on consensus unless you set
it to a block with an invalid history. Because of this it can
easily be updated without risk of influencing the network consensus.
This results in a massive IBD speedup.
This approach was independently recommended by Peter Todd and Luke-Jr,
since POW-based signature skipping (see PR#9180) does not have the
verifiable properties of a specific hash and may create bad incentives.
The downside is that, like checkpoints, the defaults bitrot and older
releases will sync slower. On the plus side, users can provide their
own value here, and if they set it to something crazy all that will
happen is that more time will be spent validating signatures.
Checkblocks and checklevel are also moved to the hidden debug options:
especially now that checkblocks has a low default there is little need
to change these settings, and users frequently misunderstand them as
influencing security or IBD speed. By hiding them we offset the
space added by this new option.