Conversation
When running the import, blocks are currently loaded in batches into a `seq` and passed to the importer as such. In reality, blocks are still processed one by one, so the batching offers no performance advantage. It does, however, force the client to waste memory, up to several GB, on the block sequence while the blocks wait to be processed. This PR introduces a persister that accepts these potentially large blocks one by one and, at the same time, removes a number of redundant / unnecessary copies, assignments and resets that were slowing down the import process in general.
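The change described above can be sketched in miniature: instead of collecting every block into a sequence before handing it off, the importer pulls blocks from a lazy source and feeds each one to a persister that keeps no backlog. The names below (`read_blocks`, `Persister`, `import_chain`) are hypothetical, illustrative stand-ins for the client's actual types, written as Python for brevity.

```python
from typing import Iterable, Iterator


def read_blocks(path: str) -> Iterator[bytes]:
    """Yield encoded blocks one at a time from an import file.

    Hypothetical reader: a lazy iterator, so only one block is
    resident in memory at a time instead of a multi-GB batch.
    """
    with open(path, "rb") as f:
        while header := f.read(4):
            size = int.from_bytes(header, "little")
            yield f.read(size)


class Persister:
    """Accepts blocks one by one; memory stays bounded by a single
    block rather than growing with the whole batch."""

    def __init__(self) -> None:
        self.imported = 0

    def put_block(self, block: bytes) -> None:
        # Validate and persist the block, then drop the reference so
        # the memory can be reclaimed before the next block arrives.
        self.imported += 1


def import_chain(blocks: Iterable[bytes], persister: Persister) -> int:
    # No intermediate seq of blocks: each block flows straight from
    # the source to the persister.
    for block in blocks:
        persister.put_block(block)
    return persister.imported
```

The key point is that `import_chain` never materializes the block list; any iterable works, so memory use is decoupled from batch size.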
    # the cost of failure is low.
    # TODO Figure out the right balance for header fields - in particular, if
    #      we receive instruction from the CL while syncing that a block is
    #      CL-valid, do we skip validation while "far from head"? probably yes.
The main thing that needs to be checked is likely the basic link back to the parent block each time, matching the block hash, i.e. the exact set of conditions that the engine API requires even an unsynced EL to perform. Beyond that, the checks aren't critical to security and can be optimized to taste.
Yeah, I guess a rule the EL can follow is: if the block is earlier than finalized (according to the CL), we do just parent-hash checking; if it's more recent than finalized, we do full checking with state root and everything. That should be pretty safe given that all the validators approved it already.
Anyway, this comment is just copy-pasted from the earlier code - it's slightly out of date because forkedchain no longer uses this code, from what I remember.