Divergence between 32- and 64-bit when hashing >4GB affects gettxoutsetinfo
#7848
Conversation
|
What would be the downside of a forced 64-bit type on 32-bit platforms? I guess the performance reduction and the slightly higher memory consumption are totally negligible? Would fixing it "on the other side" (detect size_t overflow while hashing, force the 32-bit reset on 64-bit platforms) not involve less risk? |
|
We should replace the utxo hash reported by gettxoutsetinfo by a Merkle
root of a tree whose leaves are the UTXO entries in a simple serialized
format. That would allow producing short proofs about the existence or
non-existence of certain entries to anyone who gets such a hash from a
trusted source.
This isn't hard to do using a variation of the algorithm in
consensus/merkle.cpp, but it's not my priority now.
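For context, a rough sketch of what such a hash could look like, reusing the tree construction from consensus/merkle.cpp. The function name UtxoMerkleRoot and the leaf serialization (outpoint followed by CTxOut) are assumptions for illustration only, not a proposed format:

```cpp
// Hypothetical sketch: build a Merkle root over serialized UTXO entries,
// reusing the tree construction from consensus/merkle.cpp. The leaf format
// here is an illustrative assumption, not a specification.
#include "consensus/merkle.h"
#include "hash.h"
#include "primitives/transaction.h"
#include "serialize.h"
#include "version.h"

#include <utility>
#include <vector>

uint256 UtxoMerkleRoot(const std::vector<std::pair<COutPoint, CTxOut>>& utxos)
{
    std::vector<uint256> leaves;
    leaves.reserve(utxos.size());
    for (const auto& entry : utxos) {
        CHashWriter leaf(SER_GETHASH, PROTOCOL_VERSION);
        leaf << entry.first;   // txid + output index
        leaf << entry.second;  // amount + scriptPubKey
        leaves.push_back(leaf.GetHash());
    }
    // Same tree shape as block transaction Merkle roots, so short inclusion
    // proofs (Merkle branches) can be produced for individual entries.
    return ComputeMerkleRoot(leaves);
}
```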
|
|
@laanwj Nice catches. utACK. @jonasschnelli I think it's not worth the trouble. For platforms constrained enough for the change to be significant, I suspect that a separate hashing implementation would've already been patched in. |
|
@sipa I know you have bigger plans for this. Do you disagree, though, that this is an improvement? @jonasschnelli I'd say doing the hashing as it is supposed to be done, with counters of the appropriate width - as done here - is less risky than trying to hack something with overflow detection? Adding a 64-bit and a 32-bit value is cheap on 32-bit architectures as well; it even involves overflow detection (the carry flag) internally. |
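To make the carry-flag point concrete, a minimal illustration (not project code, the function name is made up) of why widening the counter is cheap on 32-bit targets: a 64-bit addition is just two 32-bit additions chained through a carry, which is what compilers emit anyway (e.g. add/adc on x86).

```cpp
#include <cstdint>

// What `counter += len` on a uint64_t boils down to on a 32-bit machine:
// add the low halves, detect the wrap, propagate it into the high half.
uint64_t add64(uint32_t lo, uint32_t hi, uint32_t len)
{
    uint32_t new_lo = lo + len;
    uint32_t carry = (new_lo < lo) ? 1u : 0u;  // the "overflow detection" comes for free
    uint32_t new_hi = hi + carry;
    return ((uint64_t)new_hi << 32) | new_lo;
}
```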
|
The hash function code should be correct with large inputs regardless of what the utxoset hash stuff returns. utACK. |
|
@laanwj Oops, I just responded to @jonasschnelli's comment, not seeing there was a patch too. Yes, uint64_t should definitely be used for at least SHA1, SHA256 and RIPEMD160. In theory, we'd need to use a 128-bit integer for SHA512 (as it writes a 128-bit length descriptor in the padding), but I guess we'll never hit that. |
|
There can be an assert to catch the overflow in the SHA512 case, if we want to be pedantic?
Good catch! Although implementing actual 128-bit counting is just as much work as adding an overflow assertion, I'd personally say adding an assert is preferable here, as we can never test this behavior. |
|
The assert should probably exist for SHA1/256/RIPEMD160 as well, as those are undefined for longer messages. |
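A sketch of the "pedantic" option being discussed, assuming a SHA-512-style implementation that only keeps a 64-bit byte counter; the function and names below are illustrative, not the actual crypto code:

```cpp
#include <cassert>
#include <cstdint>

// SHA-512 padding ends with a 128-bit big-endian bit-length field. If only a
// 64-bit byte counter is kept and the high 64 bits are written as zero, the
// result is only correct while bytes * 8 fits in 64 bits, i.e. bytes < 2^61.
// The assert turns the (practically unreachable) overflow into a loud failure.
void WriteLengthField(uint64_t bytes, unsigned char sizedesc[16])
{
    assert(bytes < (1ULL << 61));          // would otherwise wrap when shifted below
    uint64_t bits = bytes << 3;            // message length in bits
    for (int i = 0; i < 8; i++) {
        sizedesc[i] = 0;                                          // high 64 bits: zero
        sizedesc[8 + i] = (unsigned char)(bits >> (56 - 8 * i));  // low 64 bits, big-endian
    }
}
```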
|
Hmm, looking at the actual code, I'll leave that for another time. At least this is trivially correct. I feel that the risk of someone hashing 18,446,744 TB of data is extremely low in the foreseeable future (even with the unfortunate rate of UTXO growth, that will take a while), at least smaller than the risk of me introducing bugs there. |
|
utACK 088c270c2928089f1903db0ff01c82eaaa5f6e45 |
|
Concept ACK |
Byte counts for SHA256, SHA512, SHA1 and RIPEMD160 must be 64 bits. `size_t` has a different size per platform, causing divergent results when hashing more than 4GB of data.
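A hedged sketch of the kind of change this describes, using a simplified CSHA256-like class; the class and member names are assumptions for illustration, not the actual crypto code.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only -- a real CSHA256 also keeps the chaining state and a
// 64-byte buffer; only the counter handling relevant to this change is shown.
class CSHA256Like
{
    // Previously: size_t bytes;  (32 bits wide on 32-bit platforms, so it wraps
    // after 4 GiB and the length written into the final padding block, and
    // therefore the digest, diverges from 64-bit platforms.)
    uint64_t bytes;  // SHA-256's length field is 64 bits, so count in 64 bits

public:
    CSHA256Like() : bytes(0) {}

    void Write(const unsigned char* data, size_t len)
    {
        (void)data;    // ...feed the data into the compression function...
        bytes += len;  // 64-bit accumulation: no wrap-around below 2^64 bytes
    }

    void Finalize(unsigned char hash[32])
    {
        (void)hash;
        uint64_t bit_length = bytes << 3;  // written big-endian into the padding
        (void)bit_length;                  // ...pad, append bit_length, output the digest...
    }
};
```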
088c270 to 64a2924
The key (the transaction id for the following outputs) should be serialized to the HashWriter as well; omitting it means different transactions in the same position with the same outputs can result in the same hash. Fixes the primary concern of bitcoin#7758.
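A hedged sketch of what this change does to the gettxoutsetinfo hash accumulation, assuming the CCoinsViewCursor/CHashWriter interfaces of the time; the helper name HashUtxoEntry is made up, and statistics gathering is omitted:

```cpp
#include "coins.h"      // CCoins, CCoinsViewCursor
#include "hash.h"       // CHashWriter
#include "serialize.h"  // VARINT
#include "version.h"    // PROTOCOL_VERSION

// Accumulate the UTXO set hash for one cursor entry, now including the key
// (the txid) and not just the outputs stored under it.
void HashUtxoEntry(CCoinsViewCursor* pcursor, CHashWriter& ss)
{
    uint256 key;   // txid under which the following unspent outputs are stored
    CCoins coins;  // the unspent outputs for that txid
    if (pcursor->GetKey(key) && pcursor->GetValue(coins)) {
        ss << key;  // commit to the txid as well (addresses #7758)
        for (unsigned int i = 0; i < coins.vout.size(); i++) {
            const CTxOut& out = coins.vout[i];
            if (!out.IsNull()) {
                ss << VARINT(i + 1);  // 1-based position of this unspent output
                ss << out;
            }
        }
        ss << VARINT(0);  // terminator for this txid's outputs
    }
}
```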
5014862 to 28b400f
|
Tested ACK, on an OSX 64-bit machine. On a debug build on my machine, hashing the txids makes gettxoutsetinfo about ten seconds slower (3 minutes versus 2 minutes 50 seconds). |
I was seeing different UTXO hashes from `gettxoutsetinfo` on my ARM nodes than on x86 (64-bit). After the initial panic had worn off, I eventually managed to trace this to the following: the counters for SHA256 and friends are supposed to be 64 bit. `size_t`, the type currently used, is 32-bit on architectures with a 32-bit address space. So after processing 4GB of data, the counter will wrap around on 32-bit and the hashes will start to diverge from those computed on 64-bit architectures.

I am sure this has no effect on any other use of hashes by the project, as there is no other place where >4GB of data is hashed in one go.
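For illustration, a small self-contained program (not project code) showing how the length field that ends up in the hash padding diverges once the counter is narrower than 64 bits:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t total = (1ULL << 32) + 123;  // pretend a bit over 4 GiB was hashed

    uint32_t counter32 = 0;  // what size_t gives you on a 32-bit platform
    uint64_t counter64 = 0;  // what the SHA-256 length field actually requires

    counter32 += (uint32_t)total;  // wraps modulo 2^32
    counter64 += total;

    // SHA-256 appends the message length in bits as a 64-bit big-endian field;
    // a wrapped counter therefore produces a different final block and digest.
    printf("bit length with 32-bit counter: %llu\n", (unsigned long long)counter32 * 8);
    printf("bit length with 64-bit counter: %llu\n", (unsigned long long)counter64 * 8);
    return 0;
}
```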
N.B. As this changes the output of `gettxoutsetinfo` on many platforms anyway (needs mention in release notes), I've taken the liberty of addressing the direct concern in #7758 as well in a second commit.

Before first commit
After first commit
After second commit