Fix SENT-to-FINALIZED batch send failure condition #221
Conversation
… occurs between batches being SENT and FINALIZED, making polling tasks re-run immediately if there is more work to do
ben-chain
left a comment
Woohoo--looks good to me! I did notice one possible edge case around finalization vs. success, which I wouldn't expect to happen much but is technically possible depending on the L1 contracts' implementation.
My only other comment would be that the two finalizers seem to use extremely similar code and tests. It definitely doesn't seem crazy to me to have some L1TransactionFinalizer which the two extend, but it's not strictly necessary.
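A rough sketch of that suggestion, assuming an abstract base that owns the wait-for-receipt loop while each finalizer supplies its own DB update. All names here (`L1TransactionFinalizer`, `markFinalized`, `markFailed`, the minimal `ReceiptProvider` interface) are illustrative, not the actual codebase API:

```typescript
// Minimal shape of an L1 receipt, as used below.
interface TxReceipt {
  status: number
  blockNumber: number
  blockHash: string
}

// Subset of an ethers-style provider that the base class needs.
interface ReceiptProvider {
  waitForTransaction(txHash: string, confirmations: number): Promise<TxReceipt>
}

// Hypothetical shared base: both the tx-batch and state-batch finalizers
// would extend this, overriding only the DB-update hooks.
abstract class L1TransactionFinalizer {
  constructor(protected readonly provider: ReceiptProvider) {}

  // Each subclass marks its own batch type FINALIZED (or failed) in the DB.
  protected abstract markFinalized(txHash: string, receipt: TxReceipt): Promise<void>
  protected abstract markFailed(txHash: string): Promise<void>

  // Shared polling step: wait for confirmations, then dispatch on status.
  async finalize(txHash: string, confirmations: number): Promise<void> {
    const receipt = await this.provider.waitForTransaction(txHash, confirmations)
    if (receipt.status === 1) {
      await this.markFinalized(txHash, receipt)
    } else {
      await this.markFailed(txHash)
    }
  }
}
```

The shared tests could then exercise `finalize` once against a fake provider, with each concrete finalizer only testing its own hooks.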
```
try {
  const receipt: TransactionReceipt = await this.canonicalTransactionChain.provider.waitForTransaction(
    txHash,
    1
```
One concern to watch out for here: at least from a generic EVM perspective, this does not necessarily guarantee the transaction won't have failed once it has a status of FINALIZED, because a 2-block reorg could technically make it fail only after this await.
Not sure how hard we want to drill into this sort of failure case ATM, but thought I'd surface it. One potential thing to do here would be to store the receipt's block number and block hash, and check them for equality once FINALIZED, but I'm not really sure that's the best solution.
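A minimal sketch of that idea, assuming we persist the receipt's block coordinates when the tx first confirms and re-verify them before flipping the batch to FINALIZED. The names (`verifyStillCanonical`, `MinimalProvider`, `ReceiptCoords`) are mine, not the codebase's:

```typescript
// Block coordinates persisted alongside the batch when the tx confirms.
interface ReceiptCoords {
  blockNumber: number
  blockHash: string
}

// Subset of an ethers-style provider used by the check.
interface MinimalProvider {
  getBlock(blockNumber: number): Promise<{ hash: string } | null>
}

// Returns true only if the block we originally saw the tx in is still on
// the canonical chain (same hash at the same height); a >1-block reorg
// after the initial wait would make this return false.
async function verifyStillCanonical(
  provider: MinimalProvider,
  coords: ReceiptCoords
): Promise<boolean> {
  const block = await provider.getBlock(coords.blockNumber)
  return block !== null && block.hash === coords.blockHash
}
```

The finalizer could call this right before writing FINALIZED and fall back to the alerting/manual-intervention path on a mismatch.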
My concern in the submitter is just to make sure the tx has been submitted and that, if executed, it will succeed. A single confirmation should do this. Whether or not it actually gets mined is more of a concern for the finalizer.
That said, yes, in the reorg / censorship case, our clock skew will make the tx fail after the fact. If that happens, manual intervention will probably be required in the near term: we'll have to revert geth, delete all unconfirmed outputs, replay transactions, and resubmit.
Just noting that in the case listed at the end there, we should get repeated loud alerts from this line (and the same line in the other file).
We'll definitely have to come up with a playbook for these sorts of predictable but not preventable cases.
I suppose another option here is to actually wait for finalization in the submitter (but update it in the DB in the finalizer). The problem with that is that we'd only be able to submit one rollup batch per block, which kills the scalability aspect.
```
try {
  const receipt: TransactionReceipt = await this.stateCommitmentChain.provider.waitForTransaction(
    txHash,
    1
```
Same note as above w.r.t. >1-block reorgs. Also noting the very similar functionality, in case there's any possibility to easily share it.
Description
If our process fails in between a tx / state batch being SENT and being FINALIZED, we end up in an irrecoverable state. This fixes that by adding a separate finalizer polling service. Also makes it so that pollers re-run immediately if there is more work to do.
Metadata
Fixes
Contributing Agreement