Closed
Labels
P:bandwidth-optimization (Priority: Optimize bandwidth usage)
Description
Was tendermint/tendermint#9922
It has been identified that votes, block parts, and transaction propagation in the mempool use more data than should be needed to reach a decision (tendermint/tendermint#9706).
In this task we determine by what factor these inefficiencies occur for each kind of message (which will let us prioritize the optimizations) and what the sources of the inefficiencies are (which will point toward the fixes).
Some questions to be answered:
- do nodes forget having sent messages and send them again?
  - they don't forget having sent a message (except for one case that has already been fixed), but sometimes they keep sending it until they receive the same message back from the other node.
- do all nodes needlessly send the same messages to the same nodes?
  - given the unstructured nature of the network, yes: the same message is received multiple times from multiple sources.
- is the "has votes" message effective?
  - yes, as long as it is delivered before the votes themselves, which is not normally the case. This has been addressed in consensus: optimize vote and block part gossip with HasProposalBlockPartMessage and random sleeps #904.
Tasks:
- Add metrics to track how many times a node receives duplicate votes (New metrics to track duplicate votes and block parts #896)
- Add metric to track how many times a node receives duplicate block parts (New metrics to track duplicate votes and block parts #896)
- Add metric to track how many times a node receives duplicate transactions (present in mempool cache) (mempool: Add metric to measure how many times a tx was received #637)
- Compile the results (see the discussion in consensus: optimize vote and block part gossip with HasProposalBlockPartMessage and random sleeps #904)
- Add logs to identify the sources of duplicate votes (metrics and code analysis were enough)
- Add logs to identify the sources of duplicate block parts (metrics and code analysis were enough)
- Add logs to identify the sources of duplicate transactions (present in mempool cache) (metrics and code analysis were enough)
DoD:
- We identified why duplication happens.
- This information serves as input to optimizing the message exchange: Bandwidth optimization #30