
Fix Oversized Data transaction error#299

Merged
willmeister merged 9 commits into master from 289_fix_oversized_transaction_error
Oct 1, 2020
Conversation

@willmeister

Description

Note: This builds off of the fix_tx_and_root_batch_ordering_issue branch, so it contains the changes from #296

This change exactly calculates the size of the L1 tx to be submitted for a tx sent to CanonicalTransactionChain.appendSequencerBatch(...), allowing batch creation to maximize the number of L2 txs rolled up without exceeding the maximum tx size that the L1 chain can process.

Questions

  • Are there any other fields that contribute to serialized L1 or L2 tx size that I'm missing?


- 32 for value
- 32 for R (signature)
- 32 for S (signature)
- 2 (at most) for V (signature)
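Summing the fixed signed-transaction fields above gives a constant per-tx overhead; a minimal sketch (the constant names are illustrative, not from the PR):

```typescript
// Upper-bound byte estimate for the signed-tx fields listed above.
// Budgets V at 2 bytes so the estimate never undercounts.
const SIGNED_TX_FIELD_BYTES: Record<string, number> = {
  value: 32,
  r: 32,
  s: 32,
  v: 2, // worst case; often 1 byte on mainnet, but 2 never underestimates
}

const signedTxOverheadBytes: number = Object.values(
  SIGNED_TX_FIELD_BYTES
).reduce((sum, bytes) => sum + bytes, 0)

console.log(signedTxOverheadBytes) // 98
```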
Contributor

Shouldn't this always be two, ie padded to 2 bytes if it is smaller?

Author

Ahh, so this is the L1 tx, so we won't be padding it like the L2 ones. I could be wrong, but I think V is only 1 byte for mainnet txs because Chain ID is < 255. We only needed to worry about padding for our L2 Chain ID of 420 since it requires more than 1 byte but is able to be represented with 3 hex chars instead of requiring all 4. Either way, we should be fine if we just assume V always takes 2 bytes -- our goal is to not underestimate. Being over by 1 byte worst case should be fine.
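The point about V can be checked against the EIP-155 formula v = chainId * 2 + 35 (+1 for the recovery bit); a quick sketch with illustrative helper names:

```typescript
// EIP-155 recovery value: v = chainId * 2 + 35 + recoveryBit (0 or 1).
const eip155V = (chainId: number, recoveryBit: 0 | 1): number =>
  chainId * 2 + 35 + recoveryBit

// Bytes needed to represent n without padding.
const bytesFor = (n: number): number => Math.ceil(n.toString(16).length / 2)

console.log(bytesFor(eip155V(1, 1))) // mainnet (chain ID 1): v = 38 -> 1 byte
console.log(bytesFor(eip155V(420, 1))) // L2 chain ID 420: v = 876 -> 2 bytes
```

So budgeting 2 bytes for V, as the comment concludes, overestimates by at most 1 byte on mainnet.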

Contributor

@karlfloersch karlfloersch left a comment

Yay, this looks great! My one comment was that in order to get a precise estimate of the ABI-encoded calldata, you could consider ABI-encoding the inputs. It feels a little cheap, considering it should be easy to calculate (using our brains) the total number of bytes in each tx's calldata, but the upside is that it might be a bit more general purpose if we need to change function inputs down the line.

That said, take the suggestion or leave it; I think this PR is looking good to go. It was definitely a great catch that not only are we getting extra bytes from the standard Ethereum transaction, but we're also getting tons of extra bytes from the standard ABI encoding scheme. So inefficient!!!

This LGTM!! 🎉

L1_TO_L2_TRANSACTION_QUEUE_CONTRACT_ADDRESS=0xA1B22bF35196AFCE0927D94ce8ad4C4b7bb6F005 # (Required) The address of the L1ToL2TransactionQueue contract
SAFETY_TRANSACTION_QUEUE_CONTRACT_ADDRESS=0x3a3BEFB4942cF20A69C8eb4FDB8957223Da57fa2 # (Required) The address of the SafetyTransactionQueue contract
L1_CHAIN_DATA_PERSISTER_DB_PATH=/mnt/l1-node # (Required) The filepath where to locate (or create) the L1 Chain Data Persister LevelDB database
L1_EARLIEST_BLOCK=0 # (Required) The earliest block to sync on L1 to start persisting data
Contributor

Oh @willmeister I might have forgotten to mention this! @tynes & I discovered that if variables in the env files are set to:

ENV_VAR=                                                 # Some comment

The env variable seems to actually be getting set to the comment!!!! This is super scary and could cause some very hard-to-track-down bugs. I don't see any instances in this env file, but thought I'd mention it because it could cause grief down the line
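One defensive option (a sketch, not from this PR; the helper name is made up) is to strip anything after a `#` when reading env vars, so a parser that folds an inline comment into the value cannot poison it:

```typescript
// Sketch: read an env var and strip a trailing "# comment", guarding
// against parsers that fold inline comments into the value.
function getEnvVar(name: string): string | undefined {
  const raw = process.env[name]
  if (raw === undefined) {
    return undefined
  }
  // Drop everything from the first '#' onward, then trim whitespace.
  const stripped = raw.split('#')[0].trim()
  return stripped.length > 0 ? stripped : undefined
}

process.env['L1_EARLIEST_BLOCK'] = '0   # (Required) The earliest block'
console.log(getEnvVar('L1_EARLIEST_BLOCK')) // 0
```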

- 32 (timestamp)
- 32 (block number)
- 32 (starts at index)
- 32 (number of txs bytes elements)
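The four 32-byte words above, plus a 4-byte function selector (the selector is not in the list above; it is an assumption about standard ABI calldata), form a fixed per-batch overhead; a sketch with illustrative names:

```typescript
// Fixed ABI calldata overhead per batch submission, per the 32-byte
// words listed above. The 4-byte selector is an added assumption.
const METHOD_ID_BYTES = 4 // function selector
const FIXED_WORD_COUNT = 4 // timestamp, block number, starts-at index, element count
const BATCH_HEADER_OVERHEAD_BYTES = METHOD_ID_BYTES + FIXED_WORD_COUNT * 32

console.log(BATCH_HEADER_OVERHEAD_BYTES) // 132
```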
Contributor

@ben-chain you might be good at double checking to make sure there's nothing missing in these calculations. I don't spot any issues off the top of my head though

if (totalCalldataBytes > maxBatchCalldataBytes) {
totalL1TxBytes +=
  roundToNearestMultipleOf32(rowBytes) +
  L1_ROLLUP_BATCH_TX_BYTES_PER_L2_TX
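A plausible body for the `roundToNearestMultipleOf32` helper used above (a sketch; the PR's actual implementation is not shown here), reflecting that ABI-encoded dynamic bytes are padded to 32-byte words:

```typescript
// Round a byte count up to the next 32-byte ABI word boundary.
const roundToNearestMultipleOf32 = (numBytes: number): number =>
  Math.ceil(numBytes / 32) * 32

console.log(roundToNearestMultipleOf32(1)) // 32
console.log(roundToNearestMultipleOf32(64)) // 64
console.log(roundToNearestMultipleOf32(65)) // 96
```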
Contributor

One idea that I had just reading this is you could potentially even try creating an Ethers canonicalTxChain: Contract object and literally calling canonicalTxChain.interface.encodeFunctionData(batch) to get the length of the ABI-encoded object. This is definitely less efficient, but it at least removes some human error.

That said, I did double check the numbers, so this should return the correct value, I believe.
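The hand-counted arithmetic that `encodeFunctionData` would replace can be sketched dependency-free for a function taking a single dynamic `bytes[]` argument (simplified and illustrative; not the real `appendSequencerBatch` layout):

```typescript
// Size of ABI-encoded calldata for one dynamic bytes[] argument:
// selector + head word + array length word + per-element tail.
function abiBytesArrayCalldataSize(elementLengths: number[]): number {
  const SELECTOR = 4 // 4-byte function selector
  const HEAD = 32 // head word: offset to the dynamic array
  const ARRAY_LEN = 32 // word holding the element count
  let tail = 0
  for (const len of elementLengths) {
    tail += 32 // per-element offset word
    tail += 32 // bytes length word
    tail += Math.ceil(len / 32) * 32 // data padded to a full word
  }
  return SELECTOR + HEAD + ARRAY_LEN + tail
}

console.log(abiBytesArrayCalldataSize([4])) // 164
```

Comparing a count like this against the length of the actual `encodeFunctionData(...)` output would catch any word miscounted by hand.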

Author

I think this could be a good idea as a sanity check at the end, but I was a bit concerned about calling it every iteration on each tx. It's quite possible this concern is unfounded, though, and there would be minimal speed impact calling encode + sign hundreds of times in the worst case.

I'm happy to change this or to profile it if you think we should, though!

public static canonicalChainBatchMaxCalldataBytes(
defaultValue: number = 125_000
public static canonicalChainMaxBatchTxBytes(
defaultValue: number = 110_000
Contributor

Nice safety margin

@willmeister willmeister merged commit 9bd0ff4 into master Oct 1, 2020
@willmeister willmeister deleted the 289_fix_oversized_transaction_error branch October 1, 2020 17:33
snario pushed a commit that referenced this pull request Apr 14, 2021
protolambda pushed a commit to protolambda/optimism that referenced this pull request May 1, 2022
The sync start algorithm has been modified to find both a Safe Block
and Unsafe Block. 

The FindL2Heads function first finds the Unsafe Block, and then continues
to walk back the L2 chain until it finds an L2 block that can be completely
derived from the sequence window that has not been changed in the
reorg (i.e. the last block of the window was the reorg base).

This FindSyncStart function previously only found the Unsafe Block.
@mslipper mslipper mentioned this pull request May 16, 2022
theochap pushed a commit that referenced this pull request Dec 10, 2025
* feat: dockerize trusted-sync

fix: tag image

chore: move dockerfiles to example dir

clean up dockerfile

fix: dockerfile

* feat: docker publish github action

* feat: docker publish github action
Zena-park pushed a commit to tokamak-network/optimism that referenced this pull request Dec 30, 2025
…sm#299

We advise omitting the manual validation through the require check within each function, optimizing the code's gas cost.
theochap pushed a commit that referenced this pull request Jan 15, 2026
### Description

Introduces compressors into `op-alloy-protocol`, cleaning up the brotli
and zlib utility methods as well as the `ChannelOut` type.

Co-authored-by: Matthias Seitz <matthias.seitz@outlook.de>
emhane pushed a commit that referenced this pull request Feb 3, 2026

Development

Development

Successfully merging this pull request may close these issues.

Bug :: Oversized transaction error

3 participants