
Fix sent to finalized batch send failure condition#221

Merged
willmeister merged 7 commits into master from
YAS-604/FixSentToFinalizedFailureCondition
Aug 20, 2020
Conversation

@willmeister

Description

If our process fails in between a tx / state batch being SENT and being FINALIZED, we end up in an irrecoverable state. This fixes that by adding a separate finalizer polling service.

Also makes it so that pollers re-run immediately if there is more work to do.
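The immediate re-run behavior could look something like the sketch below. This is an illustrative shape only, assuming a poller pass that reports whether work remains; names like `PollingService`, `runOnePass`, and `PassResult` are not from this PR.

```typescript
// Illustrative sketch (not code from this PR): a poller that re-runs
// immediately when the last pass reports more work, and only sleeps for the
// poll interval when the queue is drained.
type PassResult = { moreWorkRemaining: boolean }

class PollingService {
  private running = false

  constructor(
    private readonly runOnePass: () => Promise<PassResult>,
    private readonly pollIntervalMs: number
  ) {}

  async start(): Promise<void> {
    this.running = true
    while (this.running) {
      const result = await this.runOnePass()
      // Re-run immediately if there is more work to do; otherwise wait.
      if (!result.moreWorkRemaining) {
        await new Promise<void>((resolve) =>
          setTimeout(resolve, this.pollIntervalMs)
        )
      }
    }
  }

  stop(): void {
    this.running = false
  }
}
```

This keeps latency low when batches back up, while preserving the idle poll interval when there is nothing to submit.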

Collaborator

@ben-chain ben-chain left a comment


Woohoo--looks good to me! I did notice one possible edge case around finalization vs. success which I wouldn't expect to happen much, but is technically possible depending on the L1 contracts' implementation.

My only other comment would be that the two finalizers seem to use extremely similar code and tests. It wouldn't seem crazy to me to have some L1TransactionFinalizer which the two extend, but it's not strictly necessary.
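The shared base class the reviewer suggests could be sketched as follows. This is a hypothetical illustration of the idea, not code from the PR: the `Provider` interface, `markFinalized` hook, and `TxBatchFinalizer` name are all assumptions.

```typescript
// Illustrative sketch of the suggested L1TransactionFinalizer base class.
// The shared logic (wait for the L1 tx, check it didn't revert) lives in the
// base; subclasses only differ in how they record the finalized batch.
interface TransactionReceipt {
  status?: number
  blockNumber: number
  blockHash: string
}

interface Provider {
  waitForTransaction(
    txHash: string,
    confirmations: number
  ): Promise<TransactionReceipt>
}

abstract class L1TransactionFinalizer {
  constructor(
    protected readonly provider: Provider,
    protected readonly confirmations: number
  ) {}

  async finalize(txHash: string): Promise<void> {
    const receipt = await this.provider.waitForTransaction(
      txHash,
      this.confirmations
    )
    if (receipt.status === 0) {
      throw new Error(`Batch tx ${txHash} reverted on L1`)
    }
    await this.markFinalized(txHash, receipt)
  }

  // Batch-type-specific persistence (e.g. a DB status update).
  protected abstract markFinalized(
    txHash: string,
    receipt: TransactionReceipt
  ): Promise<void>
}

class TxBatchFinalizer extends L1TransactionFinalizer {
  public finalized: string[] = []

  protected async markFinalized(txHash: string): Promise<void> {
    this.finalized.push(txHash) // stand-in for a DB update
  }
}
```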

try {
  const receipt: TransactionReceipt = await this.canonicalTransactionChain.provider.waitForTransaction(
    txHash,
    1
Collaborator

One concern to watch out for here is that (at least from a generic EVM perspective) a status of FINALIZED does not necessarily guarantee that the transaction hasn't failed, because a 2-block reorg could technically make it fail only after this await.

Not sure how hard we want to drill into this sort of failure case ATM, but thought I'd surface it. One potential thing to do here would be to store the receipt's block number and block hash, and check for equality once FINALIZED, but I'm not really sure that's the best solution.
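The block-number/block-hash check suggested here could look something like this. It's a minimal sketch of the idea only; the `StoredReceipt` shape and the `getBlockHash` callback are assumptions, not code from the PR.

```typescript
// Illustrative sketch of the reorg check: persist the receipt's block number
// and hash at confirmation time, then re-check at finalization time that the
// block at that height still has the same hash. A mismatch means a reorg
// replaced the block and the batch tx may no longer be included.
interface StoredReceipt {
  txHash: string
  blockNumber: number
  blockHash: string
}

async function confirmStillCanonical(
  stored: StoredReceipt,
  getBlockHash: (blockNumber: number) => Promise<string>
): Promise<boolean> {
  const currentHash = await getBlockHash(stored.blockNumber)
  return currentHash === stored.blockHash
}
```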

Author

My concern in the submitter is just to make sure the tx has been submitted and, if run, will succeed. A confirm should do this. Whether or not it actually gets mined is more of a concern of the finalizer.

That said, yes, in the reorg / censorship case, our clock skew will make the tx fail after the fact. If that's the case, manual intervention will probably be required in the near term, as we'll have to revert geth, delete all unconfirmed outputs, replay transactions, and resubmit.

Author

@willmeister willmeister Aug 20, 2020

Just noting that in the case listed at the end there, we should get repeated loud alerts from this line (and the same line in the other file).

We'll definitely have to come up with a playbook for these sorts of predictable but not preventable cases.

I suppose another option on this one is to actually wait for finalization in the submitter (but update it in the DB in the finalizer). The problem with that is that we'd only be able to submit one rollup batch per block, which kills the scalability aspect.

try {
  const receipt: TransactionReceipt = await this.stateCommitmentChain.provider.waitForTransaction(
    txHash,
    1
Collaborator

Same note as above WRT >1-block reorgs. Also noting the very similar functionality here, in case there's any way to easily share code.

@willmeister willmeister merged commit acd1555 into master Aug 20, 2020
@willmeister willmeister deleted the YAS-604/FixSentToFinalizedFailureCondition branch August 20, 2020 20:37
protolambda pushed a commit to protolambda/optimism that referenced this pull request May 1, 2022
This still has concurrency when fetching receipts and does retries, but
it removes the cache in front of the L1 client and does not limit the total
amount of in flight requests.
@mslipper mslipper mentioned this pull request May 16, 2022
bap2pecs pushed a commit to babylonlabs-io/optimism that referenced this pull request Jul 31, 2024
xibao-nr pushed a commit to node-real/combo-optimism that referenced this pull request Feb 19, 2025
Zena-park added a commit to tokamak-network/optimism that referenced this pull request Dec 30, 2025
emhane pushed a commit that referenced this pull request Feb 2, 2026
Description:

- This PR adds scaffold MDBX proofs storage with impl to unblock
parallel work.
- Unblock multiple developers and avoid merge conflicts while the MDBX
backend is being built.
- Provide a compilable stub that wires types and env init, without
committing to final storage logic.