Compact Block Propagation #7932

@cmwaters

Protocol Change Proposal

Summary

This is a proposal to optimize block propagation by gossiping only the hashes of each transaction.

Context

Block propagation currently works as follows: the proposer requests transactions from its node's mempool, constructs the block from that set of transactions and the prior state, breaks the block into parts, and gossips those parts through the network alongside a proposal message. Receiving nodes reconstruct the block, verify it, and, if they are validators, vote on it. Simultaneously, the mempool broadcasts transactions throughout the network. The result is that each transaction is transmitted at least twice: once via the mempool and once via consensus.

The idea of gossiping "compact" blocks is already employed by other networks such as Bitcoin (https://arxiv.org/pdf/2101.00378.pdf).

Proposal

At a high level, the consensus engine requests a set of transaction hashes from the mempool and combines them with the Header, Commit and Evidence (evidence could also be represented as hashes, since the evidence pool works in a similar manner) to produce a "compact block". This compact block would then be gossiped to peers alongside the proposal. It should be small enough not to require chunking (though this could be re-evaluated later). Nodes that receive the compact block pass the transaction hashes to their mempool. If the mempool already has the transactions, it returns them to consensus so that consensus can reconstruct the block. If not, the mempool uses a request/response model with connected peers to fetch the missing transactions. Once complete, consensus continues with verification and voting. If not all transactions can be fetched within the consensus timeout, the node prevotes/precommits nil and moves on to the next round.

The new request/response mechanism in the mempool could be done over a separate, higher priority channel than the gossiping of transactions.
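To make the channel-separation idea concrete, the sketch below models two mempool channels with distinct priorities. The struct loosely mirrors the shape of Tendermint's p2p channel descriptors, but the IDs, names, and priority values here are illustrative assumptions, not the real assignments.

```go
package main

import "fmt"

// ChannelDescriptor loosely mirrors a p2p channel configuration.
// Higher Priority means the p2p layer schedules its messages first.
type ChannelDescriptor struct {
	ID       byte
	Name     string
	Priority int
}

func main() {
	channels := []ChannelDescriptor{
		// Existing transaction gossip channel (illustrative values).
		{ID: 0x30, Name: "MempoolTxGossip", Priority: 5},
		// Hypothetical new channel: request/response fetching of missing
		// transactions during compact block reconstruction, prioritized
		// above plain gossip so block reconstruction is not starved.
		{ID: 0x31, Name: "MempoolTxFetch", Priority: 8},
	}
	for _, ch := range channels {
		fmt.Printf("%#x %s priority=%d\n", ch.ID, ch.Name, ch.Priority)
	}
}
```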

Moving Forward

Before writing any design documentation, a rudimentary experiment should be run to gauge the effectiveness of the change: write a custom node that, for every block it receives from its peers, counts how many of the block's transactions it already had in its mempool. This should be done on both high- and low-throughput networks. A high percentage indicates that there are significant gains to be made from this optimization.
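The metric the experiment measures can be sketched as a simple hit-rate calculation. Names here (`hitRate`, the string-keyed `seen` set) are hypothetical; a real node would use fixed-size hash keys from its mempool cache.

```go
package main

import "fmt"

// hitRate computes, for one received block, the fraction of its
// transactions that were already present in the local mempool.
// seen is the set of tx hashes the mempool holds (hex strings here
// for simplicity).
func hitRate(seen map[string]bool, blockTxs []string) float64 {
	if len(blockTxs) == 0 {
		return 1.0 // an empty block needs nothing fetched
	}
	hits := 0
	for _, h := range blockTxs {
		if seen[h] {
			hits++
		}
	}
	return float64(hits) / float64(len(blockTxs))
}

func main() {
	seen := map[string]bool{"a1": true, "b2": true, "c3": true}
	block := []string{"a1", "b2", "d4", "e5"} // 2 of 4 already in mempool
	fmt.Printf("%.2f\n", hitRate(seen, block)) // 0.50
}
```

Averaging this value over many blocks on both high- and low-throughput networks gives the percentage the experiment is after: values near 1.0 mean compact blocks would rarely need to fetch anything.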

This proposal has some overlap with parts of #7922. I would think that such an optimization should be evaluated/implemented post Tendermint 1.0. As the current block chunking assumes nothing about the structure of blocks, it would still be possible to fall back on it in the case of extremely large blocks.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Metadata

Assignees

No one assigned

    Labels

    S:proposal (Status: Proposal), stale (for use by stalebot)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests