feat: add entry map to `PartialStorageMap` (#1878)
Force-pushed from 2647c0e to ad222bb.
LGTM, great find! Let's wait for a second review before merging. For everyone else's context, this was found from a test we have on the client while trying to update everything to the latest `next`. The account fails to update when deploying a new account which has storage maps: `partial_account_to_full` cannot recreate the correct storage commitment because the partial account has only hashed keys.
We should look into replicating the scenario in a test for this repo.
There might also be an alternative solution to this PR, but I'm not sure.
```rust
/// Returns a new instance of partial storage map with the specified partial SMT and stored
/// entries.
pub fn new(partial_smt: PartialSmt, map: BTreeMap<Word, Word>) -> Self {
    PartialStorageMap { partial_smt, map }
```
We should probably check that any key in `map` is also included as a leaf in `partial_smt` (keeping the hash in mind).
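The check suggested above could be sketched as follows. This is a minimal illustration only: `Word`, the key hash, and the partial SMT here are stand-ins (the SMT is reduced to the set of hashed keys it tracks), not the actual miden-crypto types.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Stand-ins for illustration only: the real `Word`, `PartialSmt`, and key hashing
// live in miden-crypto and look nothing like these.
type Word = [u64; 4];

// Placeholder key hash (the real one is Rpo256-based).
fn hash_key(raw_key: &Word) -> Word {
    [raw_key[0].wrapping_mul(31).wrapping_add(7), raw_key[1], raw_key[2], raw_key[3]]
}

/// Checks that every raw key in `map` hashes to a leaf tracked by the partial SMT,
/// here represented by the set of hashed keys it tracks.
fn validate_entries(
    tracked_leaves: &BTreeSet<Word>,
    map: &BTreeMap<Word, Word>,
) -> Result<(), String> {
    for raw_key in map.keys() {
        let hashed = hash_key(raw_key);
        if !tracked_leaves.contains(&hashed) {
            return Err(format!("key {raw_key:?} is not tracked by the partial SMT"));
        }
    }
    Ok(())
}
```

A constructor could run this check and reject mismatched inputs before building the struct.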
bobbinth left a comment:
Looks good! Thank you for bringing up this issue! I left a couple of comments inline - basically wondering if we should have a more systematic approach to handling this.
```rust
pub fn add(
    &mut self,
    key: &Word,
    value: &Word,
    witness: StorageMapWitness,
) -> Result<(), MerkleError> {
```
A couple of thoughts here:

The value should already be in the witness. We should be able to get it via the `StorageMapWitness::find()` method. So, there is no need to pass the value separately, and this could look like:

```rust
pub fn add(
    &mut self,
    raw_key: &Word,
    witness: StorageMapWitness,
) -> Result<(), MerkleError>
```

But I also wonder if the `raw_key` should be a part of the `StorageMapWitness`. Basically, it could look something like:

```rust
pub struct StorageMapWitness {
    raw_keys: Vec<Word>,
    proof: SmtProof,
}
```

We could also create a newtype for raw keys - something like `RawKey(Word)` - I think @PhilippGackstatter mentioned this somewhere.
One thing I'm not sure about is how big of a refactoring this is (and if this affects the node in some way, cc @drahnr).
For reference, the issue is this one: #1864.
What to do here depends a bit on whether we want to revert these changes to `PartialStorageMap` in the future or not. I'm leaning towards letting it only track hashed keys once we can remove the partial-to-full account conversion in the prover, so I would only introduce minimal changes here that fix the issue. So I'd go with this for now:

```rust
pub fn add(
    &mut self,
    raw_key: &Word,
    witness: StorageMapWitness,
) -> Result<(), MerkleError>
```

```rust
pub fn new(partial_smt: PartialSmt, map: BTreeMap<Word, Word>) -> Self {
    PartialStorageMap { partial_smt, map }
```
Related to the above comment, technically, the only thing we need to pass in here is the set of raw keys. So, it could look something like:

```rust
pub fn new(partial_smt: PartialSmt, map: Vec<RawKey>) -> Self
```

But again, not sure how big of a refactoring it is.
PhilippGackstatter left a comment:
Great find and thank you for coming up with a fix, too!
I can reproduce the error with a test in miden-base, which we should probably add as part of this PR (see below).

We only need the original keys in the partial storage map because of the awkward conversion from partial account back to full account introduced in #1840. The long-term idea was to investigate changing the representation of new public accounts to `AccountDelta` as well, so as to avoid this conversion. If we end up going that route, we should be able to revert the changes here. Then, it would again be sufficient for a partial storage map to only track hashed map keys.

Until that time, I think this PR is the right solution, with the changes that @bobbinth mentioned. It may even be worth doing #1864 (despite not having 0xMiden/crypto#430 yet) as a follow-up to this PR, because the difference between unhashed and hashed map keys is obviously quite subtle. If we want this PR to just fix the issue, I'd keep using `Word` and then introduce the newtype wrappers as a follow-up, since this affects a larger part of the storage map and partial storage map API.
Test
The following test results in this error (at current `next@2f18f6e5`), caused by the incorrectly converted `StorageMap` from the `PartialStorageMap`. With this PR the test passes.
```
Error: failed to build proven transaction

Caused by:
    proven transaction's final account commitment 0x3cb90f43acc6daff9348a847cd4a424db0bc0d9e29c53bac246528a81bdaffdb and account details commitment 0x31bcd21f02c885ffe550b059df8c38743a161b0ee731d50d6dc8fd7ffaf256ef must match
```
The test (I'd add this in `test_account.rs` in the storage test section):
```rust
/// Tests that the storage map updates for a _new public_ account in an executed and proven
/// transaction match up.
///
/// This is an interesting test case because for new public accounts the prover converts the
/// partial account into a full account as a temporary measure. Because of the additional
/// hashing of map keys in storage maps, this test ensures that the partial storage map is
/// correctly converted into a full storage map. If we end up representing new public accounts
/// as account deltas, this test can likely go away.
#[test]
fn proven_tx_storage_map_matches_executed_tx_for_new_account() -> anyhow::Result<()> {
    // Build a public account so the proven transaction includes the account update.
    let mock_slots = AccountStorage::mock_storage_slots();
    let (mut account, seed) = AccountBuilder::new([1; 32])
        .storage_mode(AccountStorageMode::Public)
        .with_auth_component(Auth::IncrNonce)
        .with_component(MockAccountComponent::with_slots(mock_slots.clone()))
        .build()?;

    // The index of the mock map in account storage is 2.
    let map_index = 2u8;

    // Fetch a random existing key from the map.
    let StorageSlot::Map(mock_map) = &mock_slots[map_index as usize] else {
        panic!("expected map");
    };
    let existing_key = mock_map.entries().next().unwrap().0;
    let value0 = Word::from([3, 4, 5, 6u32]);

    let code = format!(
        "
        use.mock::account

        begin
            # Update an existing key.
            push.{value0}
            push.{existing_key}
            push.{map_index}
            # => [index, KEY, VALUE]
            call.account::set_map_item

            exec.::std::sys::truncate_stack
        end
        "
    );

    let builder = ScriptBuilder::with_mock_libraries()?;
    let source_manager = builder.source_manager();
    let tx_script = builder.compile_tx_script(code)?;

    let tx = TransactionContextBuilder::new(account.clone())
        .tx_script(tx_script)
        .account_seed(Some(seed))
        .with_source_manager(source_manager)
        .build()?
        .execute_blocking()?;

    let map_delta = tx.account_delta().storage().maps().get(&map_index).unwrap();
    assert_eq!(
        map_delta.entries().get(&LexicographicWord::new(*existing_key)).unwrap(),
        &value0
    );

    let proven_tx = LocalTransactionProver::default().prove_dummy(tx.clone())?;
    let AccountUpdateDetails::New(new_account) = proven_tx.account_update().details() else {
        panic!("expected delta");
    };

    account.apply_delta(tx.account_delta())?;

    for (idx, slot) in new_account.storage().slots().iter().enumerate() {
        assert_eq!(slot, &account.storage().slots()[idx], "slot {idx} did not match");
    }

    Ok(())
}
```

```rust
pub fn add(
    &mut self,
    key: &Word,
    value: &Word,
    witness: StorageMapWitness,
) -> Result<(), MerkleError> {
```
The conclusion from the call was to try and put this PR on hold and address #1879 next, which should hopefully avoid the need to track the original keys in the partial storage map in the first place.
As mentioned in #1879 (comment), I would go ahead with this PR after all. It turned out that for this issue, we would need to have the original keys in the `PartialStorageMap` for new accounts.
Edit: I'm also happy to take over this PR if you have other things to do @tomyrd.
```rust
    PartialStorageMap { partial_smt }

/// Returns a new instance of partial storage map with the specified partial SMT and stored
/// entries.
pub fn new(partial_smt: PartialSmt, map: BTreeMap<Word, Word>) -> Self {
```
Would it work to do what @bobbinth suggested in another comment:

```rust
pub struct StorageMapWitness {
    raw_keys: Vec<Word>,
    proof: SmtProof,
}
```

and then change this to `PartialStorageMap::from_proofs(proofs: impl IntoIterator<Item = StorageMapWitness>)`? That would be a nicer API and avoid the need to validate that the map has all the keys that the partial SMT tracks.
I went ahead with the proposed changes here since this will most likely be needed for #1879. I included the full entries in the in-memory representation. For serialization, on the other hand, I went with the approach that minimizes the required data (i.e. only serialize keys, not values).
igamigo left a comment:
Looks good, thanks for picking it up. I'll test this just in case something does not integrate as we expect, and report back.
igamigo left a comment:
LGTM! Works well with the tests that were failing before. I'm having a few other issues integrating some changes related to the lazy foreign account loading, but I think we should go ahead and merge this.
bobbinth left a comment:
Looks good! Thank you! I left a few small comments inline - once these are addressed, we are good to merge.
This PR fixes a bug we encountered in the client where `partial_storage_map_to_storage_map` wasn't generating the correct `StorageMap` from the `PartialStorageMap`. This is because the partial storage map only stored the values with its hashed keys, while the storage map needs to be built with the original keys (which then get hashed internally).

To do this, I added a `map` field to `PartialStorageMap` that stores the original key-value pairings.
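The core failure mode can be demonstrated with a toy model: a storage map hashes its keys on insert, so rebuilding it from already-hashed keys hashes them a second time and yields a different map (and hence a different commitment). `Word` and `hash_key` below are stand-ins, not miden's actual types or hash.

```rust
use std::collections::BTreeMap;

// Toy illustration of the bug: `Word` and the key hash are stand-ins, not miden's types.
type Word = u64;

// Placeholder mixer standing in for the real Rpo256-based key hash.
fn hash_key(raw_key: Word) -> Word {
    raw_key.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

/// A full storage map hashes its keys on insert.
fn build_map(entries: &[(Word, Word)]) -> BTreeMap<Word, Word> {
    entries.iter().map(|&(raw_key, value)| (hash_key(raw_key), value)).collect()
}

/// Rebuilding a map from already-hashed keys hashes them a second time,
/// which is what produced the wrong storage commitment.
fn rebuild_from_hashed(map: &BTreeMap<Word, Word>) -> BTreeMap<Word, Word> {
    map.iter().map(|(&hashed_key, &value)| (hash_key(hashed_key), value)).collect()
}
```

Rebuilding from the original raw keys reproduces the map exactly, which is why the `map` field added by this PR must hold unhashed keys.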