Fixes to MEDIUM severity issues from Jeb's report #382

philippecamacho wants to merge 9 commits into celo-integration-rebase-16.2
Conversation
Code Review
This pull request refactors the Espresso batch buffering and streaming logic by replacing internal capacity-based limits with a MaxBatchOutOfOrder bound. This change simplifies the BatchStreamer by removing the complex skip and rewind mechanisms previously used when the buffer was full. The PR also significantly improves the testing infrastructure by enabling parallel execution for integration tests and reorganizing the justfile with more granular test groups. Documentation has been updated to reflect the new test commands and parallel execution options. One piece of feedback suggests cleaning up the NewBatchBuffer constructor by removing the now-unused capacity parameter.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
```go
func NewBatchBuffer[B Batch](capacity uint64) BatchBuffer[B] {
	return BatchBuffer[B]{
		batches:  []B{},
		capacity: capacity,
```
I find this previous implementation a bit baffling.
I was unaware of this previous implementation. I'm not sure why we'd store the capacity of the buffer separately from the buffer since the slice type in golang has a builtin capacity already.
We could have easily created the buffer with the given capacity via:

```go
batches: make([]B, 0, capacity)
```

Then we would have been able to check the stored capacity with:

```go
cap(x.batches)
```
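To illustrate the point above, here is a minimal sketch (using `int` as a stand-in element type) of how a slice's built-in capacity behaves: `make` preallocates, `cap` reads the allocation back, and appends within the capacity do not reallocate.

```go
package main

import "fmt"

func main() {
	// Length 0, preallocated capacity 8, mirroring the
	// suggested batches: make([]B, 0, capacity).
	batches := make([]int, 0, 8)

	fmt.Println(len(batches)) // 0
	fmt.Println(cap(batches)) // 8

	// Appending within the capacity reuses the same backing array.
	batches = append(batches, 1, 2, 3)
	fmt.Println(len(batches), cap(batches)) // 3 8
}
```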
We would need corresponding machinery on the batcher side to ensure it detects this situation. Otherwise, as soon as one batch gets sufficiently delayed for whatever reason, the chain will stall.
```diff
-	fallbackBatchPos: originBatchPos + 1,
-	BatchBuffer:      NewBatchBuffer[B](BatchBufferCapacity),
+	fallbackBatchPos: originBatchPos,
+	BatchBuffer:      NewBatchBuffer[B](MaxBatchOutOfOrder),
```
Seems odd to replace the parameter for the NewBatchBuffer call when we've removed the capacity value that gets stored.
We should probably just remove the parameter entirely, as this makes it seem that the value passed here is important.
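The cleanup suggested above could look like the following sketch. The `Batch` interface shape and `fakeBatch` type are assumptions for the sake of a self-contained example; the real definitions live in the streamer package.

```go
package main

import "fmt"

// Assumed minimal stand-in for the streamer's Batch constraint.
type Batch interface{ Number() uint64 }

type BatchBuffer[B Batch] struct {
	batches []B
}

// With the stored capacity field removed, the constructor no longer
// needs a parameter at all.
func NewBatchBuffer[B Batch]() BatchBuffer[B] {
	return BatchBuffer[B]{batches: []B{}}
}

// Hypothetical concrete batch type used only for this example.
type fakeBatch struct{ n uint64 }

func (f fakeBatch) Number() uint64 { return f.n }

func main() {
	buf := NewBatchBuffer[fakeBatch]()
	fmt.Println(len(buf.batches)) // 0
}
```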
Closed in favor of EspressoSystems/espresso-streamers#5
Closes tickets related to the MEDIUM severity issues from Jeb's report
This PR:
- Replaces the `skipPos`/rewind mechanism with a simpler approach: batches whose number exceeds `BatchPos + MaxBatchOutOfOrder` are dropped on arrival in `processEspressoTransaction`. This bounds the buffer to at most `MaxBatchOutOfOrder` entries without needing the `ErrAtCapacity`/`skipPos`/rewind machinery.
- Fixes a `fallbackBatchPos` initialization bug: `NewEspressoStreamer` previously set `fallbackBatchPos = originBatchPos + 1`, but `Reset()` computes `BatchPos = fallbackBatchPos + 1`. This meant a `New` followed by `Reset` would set `BatchPos` to `originBatchPos + 2`, skipping one batch. Fixed by setting `fallbackBatchPos = originBatchPos` so `New` and `Reset` produce the same `BatchPos`.
- Adds `t.Parallel()` to integration tests: all Espresso integration tests now declare `t.Parallel()`.
- Adds an `espresso-streamer-unit-tests` justfile target for fast iteration on streamer/buffer unit tests.

This PR does not:

- Address other fix suggestions from Jeb's report. Another PR will be created for the low severity issues.
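The two behavioral changes described in the PR description can be sketched as follows. The constant value and function names here are hypothetical (the real ones live in the streamer package); this is only meant to make the bound and the off-by-one fix concrete.

```go
package main

import "fmt"

// Hypothetical value; the real constant lives in the streamer package.
const MaxBatchOutOfOrder uint64 = 128

// shouldDrop sketches the new bound: a batch whose number exceeds
// BatchPos + MaxBatchOutOfOrder is dropped on arrival, which caps the
// buffer at MaxBatchOutOfOrder entries.
func shouldDrop(batchNum, batchPos uint64) bool {
	return batchNum > batchPos+MaxBatchOutOfOrder
}

// resetBatchPos mirrors what Reset() computes: BatchPos =
// fallbackBatchPos + 1. Initializing fallbackBatchPos to
// originBatchPos (rather than originBatchPos + 1) makes New and Reset
// agree on the first batch position.
func resetBatchPos(fallbackBatchPos uint64) uint64 {
	return fallbackBatchPos + 1
}

func main() {
	fmt.Println(shouldDrop(100, 50)) // false: within the window
	fmt.Println(shouldDrop(200, 50)) // true: beyond BatchPos + 128

	originBatchPos := uint64(10)
	// With the fix, fallbackBatchPos = originBatchPos, so Reset
	// yields originBatchPos + 1 instead of originBatchPos + 2.
	fmt.Println(resetBatchPos(originBatchPos)) // 11
}
```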
Key places to review:

- Changes to the streamer

How to test this PR:

- Run the unit tests (`just espresso-streamer-unit-tests`)