Backfill gossip without buffering directly in LDK #1660
Merged
TheBlueMatt merged 5 commits into lightningdevkit:main on Aug 16, 2022
Conversation
Contributor
valentinewallace
left a comment
Looks good, basically :)
Force-pushed from 4aaa673 to 92b767b
Contributor
Looks good! I'm happy with this after we get a second reviewer. Feel free to squash.
This consolidates our various checks on peer buffer space into the `Peer` impl itself, making the thresholds at which we stop taking various actions on a peer more readable as a whole. This commit was primarily authored by `Valentine Wallace <vwallace@protonmail.com>` with some amendments by `Matt Corallo <git@bluematt.me>`.
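To make the consolidation concrete, here is a minimal sketch of the idea (the struct, method names, and threshold constants below are illustrative, not rust-lightning's actual API): the `Peer` type owns its outbound buffer and exposes the threshold checks as methods, so call sites ask the peer rather than comparing buffer lengths against scattered constants.

```rust
// Hypothetical thresholds; the real constants live in rust-lightning's peer_handler.
const OUTBOUND_BUFFER_LIMIT_READ_PAUSE: usize = 10;
const OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP: usize = 20;

struct Peer {
    pending_outbound_buffer: Vec<Vec<u8>>,
}

impl Peer {
    /// Whether reads from this peer should pause because its outbound
    /// buffer is filling up faster than the socket drains.
    fn should_pause_read(&self) -> bool {
        self.pending_outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_READ_PAUSE
    }

    /// Whether gossip broadcasts to this peer should be dropped rather
    /// than buffered further.
    fn should_drop_gossip_broadcast(&self) -> bool {
        self.pending_outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP
    }
}

fn main() {
    let mut peer = Peer { pending_outbound_buffer: Vec::new() };
    assert!(!peer.should_pause_read());

    // Simulate 15 queued messages: past the read-pause threshold,
    // but not yet past the drop-gossip threshold.
    for _ in 0..15 {
        peer.pending_outbound_buffer.push(vec![0u8; 16]);
    }
    assert!(peer.should_pause_read());
    assert!(!peer.should_drop_gossip_broadcast());
}
```

Keeping the comparisons behind named methods means the relative ordering of the thresholds is visible in one place, which is the readability win the commit describes.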
Force-pushed from 92b767b to 1bd0303
Collaborator
Author
Squashed without diff.
jkczyz
reviewed
Aug 15, 2022
Contributor
valentinewallace
left a comment
ACK after squash
jkczyz
previously approved these changes
Aug 15, 2022
Instead of backfilling gossip by buffering (up to) ten messages at a time, only buffer one message at a time, as the peers' outbound socket buffer drains. This moves the outbound backfill messages out of `PeerHandler` and into the operating system buffer, where it arguably belongs. Not buffering causes us to walk the gossip B-Trees somewhat more often, but avoids allocating vecs for the responses. While it's probably (without having benchmarked it) a net performance loss, it simplifies buffer tracking and leaves us with more room to play with the buffer sizing constants as we add onion message forwarding, which is an important win. Note that because we change how often we check if we're out of messages to send before pinging, we slightly change how many messages are exchanged at once, impacting the `test_do_attempt_write_data` constants.
Force-pushed from 0d2504d to 86dcb5b
Collaborator
Author
Squashed without further changes.
valentinewallace
previously approved these changes
Aug 15, 2022
Contributor
Ah, I think a warning may have been introduced in
jkczyz
previously approved these changes
Aug 15, 2022
Contributor
Yeah, that's from addressing one of my comments. TIL you can avoid the explicit
diff --git a/lightning/src/routing/gossip.rs b/lightning/src/routing/gossip.rs
index cb49ab81..e91d251f 100644
--- a/lightning/src/routing/gossip.rs
+++ b/lightning/src/routing/gossip.rs
@@ -38,6 +38,7 @@ use io_extras::{copy, sink};
use prelude::*;
use alloc::collections::{BTreeMap, btree_map::Entry as BtreeEntry};
use core::{cmp, fmt};
+use core::ops::Bound;
use sync::{RwLock, RwLockReadGuard};
use core::sync::atomic::{AtomicUsize, Ordering};
use sync::Mutex;
@@ -342,14 +343,9 @@ where C::Target: chain::Access, L::Target: Logger
fn get_next_node_announcement(&self, starting_point: Option<&PublicKey>) -> Option<NodeAnnouncement> {
let nodes = self.network_graph.nodes.read().unwrap();
- let mut iter = if let Some(pubkey) = starting_point {
- let mut iter = nodes.range(NodeId::from_pubkey(pubkey)..);
- iter.next();
- iter
- } else {
- nodes.range::<NodeId, _>(..)
- };
- for (_, ref node) in iter {
+ let starting_bound = starting_point.map_or(
+ Bound::Unbounded, |pubkey| Bound::Excluded(NodeId::from_pubkey(pubkey)));
+ for (_, ref node) in nodes.range((starting_bound, Bound::Unbounded)) {
if let Some(node_info) = node.announcement_info.as_ref() {
if let Some(msg) = node_info.announcement_message.clone() {
return Some(msg);
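The key change in the diff is replacing the manual `iter.next()` skip with an excluded lower bound. A small self-contained example (using `u32` keys as a stand-in for `NodeId`) shows the two forms are equivalent when the starting key is present; `Bound::Excluded` also behaves correctly when the starting key is absent from the map, where the skip-first approach would drop a valid entry:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut nodes: BTreeMap<u32, &str> = BTreeMap::new();
    nodes.insert(1, "a");
    nodes.insert(2, "b");
    nodes.insert(3, "c");

    // Old approach: range from the starting key, then skip one entry.
    let mut iter = nodes.range(2..);
    iter.next();
    let old: Vec<_> = iter.map(|(_, v)| *v).collect();

    // New approach: an excluded lower bound expresses "strictly after
    // this key" directly, with no manual skip.
    let new: Vec<_> = nodes
        .range((Bound::Excluded(2), Bound::Unbounded))
        .map(|(_, v)| *v)
        .collect();

    assert_eq!(old, vec!["c"]);
    assert_eq!(new, old);

    // If the starting key is absent (say, 4 was removed and we resume
    // from it), the excluded bound still yields everything after it,
    // while skip-first would have discarded the first surviving entry.
    let after_missing: Vec<_> = nodes
        .range((Bound::Excluded(0), Bound::Unbounded))
        .map(|(_, v)| *v)
        .collect();
    assert_eq!(after_missing, vec!["a", "b", "c"]);
}
```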
Force-pushed from 86dcb5b to 7970f3d
jkczyz
previously approved these changes
Aug 16, 2022
valentinewallace
previously approved these changes
Aug 16, 2022
Force-pushed from 7970f3d to 8ffaacb
Collaborator
Author
Squashed without further changes.
jkczyz
approved these changes
Aug 16, 2022
valentinewallace
approved these changes
Aug 16, 2022
valentinewallace
added a commit
to valentinewallace/rust-lightning
that referenced
this pull request
Aug 25, 2022
It's more accurate to name it as dropping gossip broadcasts, as it won't drop all gossip. Also fix accidental flipped bool introduced in lightningdevkit#1660
valentinewallace
added a commit
to valentinewallace/rust-lightning
that referenced
this pull request
Aug 25, 2022
Fixes a flipped bool that was introduced in lightningdevkit#1660
This is intended as a pre-req for #1604 with a few of the changes from #1604 pulled in. The commit description of the "main" commit follows, with a few other minor tweaks present as well.
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peers' outbound
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where it
arguably belongs.
Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.
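The one-message-at-a-time scheme described above can be sketched as a toy simulation (all names here are hypothetical, not rust-lightning's actual API): each time the socket drains, exactly one more announcement is pulled from the graph B-Tree, with the resume point tracked as an excluded lower bound instead of a buffered batch of ten.

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

/// Toy stand-in for the peer handler's backfill state (illustrative only).
struct Simulator {
    graph: BTreeMap<u32, String>, // stand-in for the network graph B-Tree
    next_key: Option<u32>,        // resume point for backfill
    outbound: Option<String>,     // at most one buffered backfill message
    sent: Vec<String>,
}

impl Simulator {
    /// Called when the OS socket buffer has room again: buffer exactly
    /// one more backfill message instead of a batch.
    fn on_socket_drained(&mut self) {
        let lower = match self.next_key {
            Some(k) => Bound::Excluded(k),
            None => Bound::Unbounded,
        };
        if let Some((&k, msg)) = self.graph.range((lower, Bound::Unbounded)).next() {
            self.next_key = Some(k);
            self.outbound = Some(msg.clone());
        }
    }

    /// Hand the single buffered message to the socket.
    fn flush(&mut self) {
        if let Some(msg) = self.outbound.take() {
            self.sent.push(msg);
        }
    }
}

fn main() {
    let graph: BTreeMap<u32, String> =
        (1..=3).map(|k| (k, format!("ann-{}", k))).collect();
    let mut sim = Simulator { graph, next_key: None, outbound: None, sent: Vec::new() };

    // Three drain events deliver the whole backfill, one message each.
    for _ in 0..3 {
        sim.on_socket_drained();
        sim.flush();
    }
    assert_eq!(sim.sent, vec!["ann-1", "ann-2", "ann-3"]);
}
```

The trade-off matches the commit message: each drain event re-enters the B-Tree (an extra `range` walk per message) instead of allocating a vec of pending responses up front.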