Fix Oversized Data transaction error #299
- 32 for value
- 32 for R (signature)
- 32 for S (signature)
- 2 (at most) for V (signature)
Shouldn't this always be two, i.e. padded to 2 bytes if it is smaller?
Ahh, so this is the L1 tx, so we won't be padding it like the L2 ones. I could be wrong, but I think V is only 1 byte for mainnet txs because the Chain ID is < 255. We only needed to worry about padding for our L2 Chain ID of 420, since it requires more than 1 byte but can be represented with 3 hex chars instead of all 4. Either way, we should be fine if we just assume V always takes 2 bytes -- our goal is to not underestimate, and being over by 1 byte in the worst case should be fine.
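The v-size reasoning above can be sketched as a small helper. `vByteLength` is a hypothetical name, and the formula assumes EIP-155 (`v = chainId * 2 + 35` or `36`):

```typescript
// Hypothetical helper mirroring the reasoning above: under EIP-155 the
// signature's v value is chainId * 2 + 35 (or 36), so its byte length
// depends on the chain ID. Mainnet (chain ID 1) fits in 1 byte; 420 needs 2.
function vByteLength(chainId: number): number {
  const v = chainId * 2 + 36 // take the larger of the two recovery values
  return Math.ceil(v.toString(16).length / 2)
}
```

Budgeting 2 bytes for V, as the comment suggests, overestimates by at most 1 byte on mainnet, which is safe given the goal of never underestimating.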
karlfloersch left a comment
Yay, this looks great! My one comment: to get a precise estimate of the ABI-encoded calldata, you could consider ABI-encoding the inputs. It feels a little cheap considering it should be easy to calculate (using our brains) the total number of bytes in each tx's calldata, but the upside is it might be a bit more general-purpose if we need to change function inputs down the line.
That said, take the suggestion or leave it -- I think this PR is looking good to go. It was definitely a great catch that not only are we getting extra bytes from the standard Ethereum transaction, but also tons of extra bytes from the standard ABI encoding scheme. So inefficient!
This LGTM!! 🎉
```
L1_TO_L2_TRANSACTION_QUEUE_CONTRACT_ADDRESS=0xA1B22bF35196AFCE0927D94ce8ad4C4b7bb6F005 # (Required) The address of the L1ToL2TransactionQueue contract
SAFETY_TRANSACTION_QUEUE_CONTRACT_ADDRESS=0x3a3BEFB4942cF20A69C8eb4FDB8957223Da57fa2 # (Required) The address of the SafetyTransactionQueue contract
L1_CHAIN_DATA_PERSISTER_DB_PATH=/mnt/l1-node # (Required) The filepath where to locate (or create) the L1 Chain Data Persister LevelDB database
L1_EARLIEST_BLOCK=0 # (Required) The earliest block to sync on L1 to start persisting data
```
Oh @willmeister I might have forgotten to mention this! @tynes & I discovered that if variables in the env files are set to:

```
ENV_VAR= # Some comment
```

the env variable seems to actually get set to the comment! This is super scary and could cause some very hard-to-track-down bugs. I don't see any instances in this env file, but thought I'd mention it because it could cause grief down the line.
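For illustration, here is a deliberately naive env-file parser (hypothetical, not the actual library in use) showing how the value can end up containing the comment when inline comments aren't stripped:

```typescript
// Deliberately naive .env parsing (illustrative only): everything after '='
// is taken verbatim, so an inline comment becomes the variable's value.
function naiveParseEnvLine(line: string): [string, string] {
  const idx = line.indexOf('=')
  return [line.slice(0, idx).trim(), line.slice(idx + 1).trim()]
}

const [key, value] = naiveParseEnvLine('L1_EARLIEST_BLOCK= # The earliest block')
// key is 'L1_EARLIEST_BLOCK'; value is '# The earliest block', not ''
```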
- 32 (timestamp)
- 32 (block number)
- 32 (starts at index)
- 32 (number of txs bytes elements)
@ben-chain you might be good at double checking to make sure there's nothing missing in these calculations. I don't spot any issues off the top of my head though
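Summing the quoted field sizes as named constants makes the double-check mechanical; this sketch is illustrative and uses only the byte counts quoted in this thread:

```typescript
// Illustrative constants built from the byte counts quoted in this thread.
const L1_TX_FIELD_BYTES =
  32 /* value */ + 32 /* R */ + 32 /* S */ + 2 /* V, at most */
const BATCH_HEADER_BYTES =
  32 /* timestamp */ +
  32 /* block number */ +
  32 /* starts at index */ +
  32 /* number of txs bytes elements */
```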
```typescript
if (totalCalldataBytes > maxBatchCalldataBytes) {
totalL1TxBytes +=
  roundToNearestMultipleOf32(rowBytes) +
  L1_ROLLUP_BATCH_TX_BYTES_PER_L2_TX
```
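A sketch of what `roundToNearestMultipleOf32` presumably does -- an assumption based on the name and on ABI encoding padding dynamic `bytes` to 32-byte words:

```typescript
// Assumed behavior of the helper used above: ABI encoding pads dynamic
// `bytes` up to a multiple of 32, so each tx's calldata cost rounds up.
function roundToNearestMultipleOf32(numBytes: number): number {
  return Math.ceil(numBytes / 32) * 32
}
```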
One idea I had just reading this: you could potentially even try creating an Ethers `canonicalTxChain: Contract` object and literally calling `canonicalTxChain.interface.encodeFunctionData(batch)` to get the length of the ABI-encoded object. This is definitely less efficient, but it at least removes some human error.
That said, I did double-check the numbers, so this should return the correct value, I believe.
I think this could be a good idea as a sanity check at the end, but I was a bit concerned about calling it every iteration on each tx. It's quite possible this concern is unfounded, though, and there would be minimal speed impact calling encode + sign hundreds of times in the worst case.
I'm happy to change this or to profile it if you think we should, though!
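As a cheap cross-check of the hand calculation (without calling an encoder per tx), the ABI-encoded calldata size for a single dynamic `bytes[]` argument can be computed directly. This sketch assumes the batch function takes one `bytes[]` parameter; the function name and shape are assumptions, not the PR's actual code:

```typescript
// Sketch: calldata size of f(bytes[] batch) under standard ABI encoding.
// Layout: 4 (selector) + 32 (head offset) + 32 (array length)
//         + per element: 32 (tail offset) + 32 (length word) + padded payload.
function encodedBatchCalldataBytes(txByteLengths: number[]): number {
  const pad32 = (n: number) => Math.ceil(n / 32) * 32
  let size = 4 + 32 + 32 // selector, offset to array, array length
  for (const len of txByteLengths) {
    size += 32              // offset word for this element
    size += 32 + pad32(len) // element length word + padded payload
  }
  return size
}
```

This mirrors why the batch picks up "tons of extra bytes" from ABI encoding: each L2 tx costs 64 bytes of pure bookkeeping plus padding before any payload.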
```typescript
public static canonicalChainBatchMaxCalldataBytes(
  defaultValue: number = 125_000
public static canonicalChainMaxBatchTxBytes(
  defaultValue: number = 110_000
```
The sync start algorithm has been modified to find both a Safe Block and an Unsafe Block. The FindL2Heads function first finds the Unsafe Block, and then continues to walk back the L2 chain until it finds an L2 block that can be completely derived from a sequence window that has not been changed in the reorg (i.e. the last block of the window was the reorg base). The previous FindSyncStart function only found the Unsafe Block.
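The walk-back described above can be sketched roughly as follows. All names, and the exact window condition, are illustrative assumptions rather than the actual FindL2Heads code:

```typescript
// Hypothetical sketch of the walk-back: starting from the unsafe head, step
// back until a block's entire sequence window ends at or before the reorg
// base, i.e. the block is derivable purely from unchanged L1 data.
function findSafeHead(
  unsafeHead: number,
  l1OriginOf: (l2Block: number) => number,
  sequenceWindowSize: number,
  reorgBase: number
): number {
  let block = unsafeHead
  while (block > 0 && l1OriginOf(block) + sequenceWindowSize > reorgBase) {
    block-- // window still overlaps reorged L1 blocks; keep walking back
  }
  return block
}
```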
* feat: dockerize trusted-sync
* fix: tag image
* chore: move dockerfiles to example dir
* clean up dockerfile
* fix: dockerfile
* feat: docker publish github action
…sm#299 We advise omitting the manual validation through the require check within each function, optimizing the code's gas cost.
### Description

Introduces compressors into `op-alloy-protocol`, cleaning up the brotli and zlib utility methods as well as the `ChannelOut` type.

Co-authored-by: Matthias Seitz <matthias.seitz@outlook.de>
Description

Note: This builds off of the `fix_tx_and_root_batch_ordering_issue` branch, so it contains the changes from #296.

This change exactly calculates the size of the L1 tx to be submitted for a tx sent to `CanonicalTransactionChain.appendSequencerBatch(...)`, allowing batch creation to maximize the number of L2 txs to roll up without exceeding the maximum tx size that can be processed by the L1 chain.

Questions
Metadata
Fixes
Contributing Agreement