Ensure lookup sync checks caches correctly #5840
Conversation
```rust
let Some(expected_blobs) = downloaded_block_expected_blobs.or_else(|| {
    self.chain
        .data_availability_checker
        .num_expected_blobs(&block_root)
```
We can delete `num_blobs_expected`
A couple of lookup tests are broken, but I think the tests are wrong? The scenarios look like they're testing that we don't trigger a block or blob lookup if we have a block in the processing cache, when really we just want to suppress the block lookup.
Fixed the PR comments here: 2b55017
Something I feel like I'm missing: is this addressing a bug that actually causes a lookup to get stuck? It looks like it's just an optimization where we are requesting blobs sooner. In the extreme case, all attestations arrive while the block is in the processing cache, so none arrive to trigger a lookup when it hits the DA cache. When the block eventually hits the DA cache, we should get a missing-components event that causes this lookup to progress, right? Or is there a separate bug preventing this?
The bug this PR is fixing is that blob requests don't check the block processing cache to figure out if the block is already downloaded. Consider a case where:
The blob request won't continue, because it's waiting for the block to download. However, if the block is not actually processing, the request may get stuck. I think the root issue is that during import the block is removed from the da_checker but not from the processing cache until later.
However, I think blob_request should consider blocks that are in either the da_checker or the processing_cache, to stay symmetric with the block_request logic. Otherwise, I think this can be a source of inconsistency if you sandwich other bugs.
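The symmetry argument above can be sketched as follows. This is a hypothetical, self-contained sketch, not the Lighthouse implementation: `Caches`, its boolean fields, and both `should_*` helpers are stand-ins invented for illustration, with `in_processing_cache` and `in_da_checker` standing in for `reqresp_pre_import_cache` and the `data_availability_checker`.

```rust
// Hypothetical sketch: both lookup paths consult the same combined
// view of the caches, so block and blob request logic stay symmetric.
#[derive(PartialEq)]
enum BlockProcessStatus {
    Unknown,
    NotValidated,
}

// Stand-ins for the real caches; the boolean fields are assumptions
// made purely for this sketch.
struct Caches {
    in_processing_cache: bool,
    in_da_checker: bool,
}

impl Caches {
    fn block_process_status(&self) -> BlockProcessStatus {
        // A block present in either cache is already downloaded.
        if self.in_processing_cache || self.in_da_checker {
            BlockProcessStatus::NotValidated
        } else {
            BlockProcessStatus::Unknown
        }
    }
}

// A block already known to either cache must not be re-downloaded.
fn should_request_block(c: &Caches) -> bool {
    c.block_process_status() == BlockProcessStatus::Unknown
}

// Blobs can proceed as soon as the block is downloaded, even while it
// is still only in the processing cache.
fn should_continue_blob_request(c: &Caches) -> bool {
    c.block_process_status() != BlockProcessStatus::Unknown
}

fn main() {
    let processing_only = Caches { in_processing_cache: true, in_da_checker: false };
    assert!(!should_request_block(&processing_only));
    assert!(should_continue_blob_request(&processing_only));
    println!("ok");
}
```

Because both helpers go through the single `block_process_status` view, a block that has left the da_checker but not yet the processing cache cannot make the two code paths disagree.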
Thanks for the explanation :)
@mergify queue |
✅ The pull request has been merged automatically at e498421
@mergify refresh |
✅ Pull request refreshed |
Issue Addressed
Sync lookup needs a view of what's currently being processed, both to prevent double downloads and to keep lookups from getting stuck.
Part of the issue in #5833 was caused by blob requests not checking the processing cache (`reqresp_pre_import_cache`). If the block is already processing, blob requests should continue.

To prevent future issues like this, I think we should encapsulate the details of the beacon_chain caches there, and expose only the minimum information sync needs to know with a simple enum, `BlockProcessStatus`.

Proposed Changes

- Add `get_block_process_status` to avoid leaking cache details into sync lookup
- In `blob_lookup_request`, consider blocks in the `NotValidated` state as downloaded and continue the download of blobs
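A minimal sketch of how the blob lookup might consume the new status. The three-variant enum shape and the `expected_blobs` payload here are assumptions for illustration; in the PR the status comes from `get_block_process_status` and the variants carry the block itself rather than a count.

```rust
// Hypothetical, simplified shape of the status the PR introduces.
// The payload here is an assumed blob count, not the real block.
enum BlockProcessStatus {
    Unknown,
    NotValidated { expected_blobs: usize },
    ExecutionValidated { expected_blobs: usize },
}

/// Returns how many blobs to request, or None if the block is not yet
/// downloaded and the blob request must keep waiting.
fn expected_blobs_to_request(status: &BlockProcessStatus) -> Option<usize> {
    match status {
        // Block not downloaded anywhere: the blob request stays pending.
        BlockProcessStatus::Unknown => None,
        // Block is downloaded (processing cache or da_checker), so the
        // blob download can continue even before validation finishes.
        BlockProcessStatus::NotValidated { expected_blobs }
        | BlockProcessStatus::ExecutionValidated { expected_blobs } => Some(*expected_blobs),
    }
}

fn main() {
    assert_eq!(expected_blobs_to_request(&BlockProcessStatus::Unknown), None);
    assert_eq!(
        expected_blobs_to_request(&BlockProcessStatus::NotValidated { expected_blobs: 3 }),
        Some(3)
    );
    println!("ok");
}
```

The key point matches the second proposed change: `NotValidated` is treated the same as a fully downloaded block for the purpose of continuing blob downloads.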