
gossipsub v1.1: prune peer exchange #234

Merged · 28 commits from feat/prune-px into master · Mar 24, 2020

Conversation

@vyzo (Collaborator) commented Nov 21, 2019

Adds support for peer exchange on prune; see #233
Depends on:

TBD:

  • Act on peer exchange when pruned
  • Tests

@vyzo requested review from Stebalien and raulk · Nov 21, 2019
@vyzo (Collaborator, author) commented Nov 21, 2019

cc @whyrusleeping @ZenGround0

@vyzo requested a review from Kubuxu · Nov 21, 2019
@vyzo added the req:filecoin and P0 (Critical: Tackled by core team ASAP) labels · Nov 21, 2019
@vyzo changed the title from "[WIP] gossipsub v1.1: prune peer exchange" to "gossipsub v1.1: prune peer exchange" · Nov 23, 2019
@vyzo (Collaborator, author) commented Nov 23, 2019

Summoning @Stebalien @raulk @Kubuxu; this is ready for review.

@Kubuxu (Member) left a comment

initial quick pass

[resolved review threads on gossipsub.go]
@aschmahmann (Contributor) left a comment

Looks pretty good to me, left a few comments to address.

Also, noting for posterity that this PR should resolve libp2p/specs#215 🎉

[resolved review threads on gossipsub.go and gossipsub_test.go]
@@ -239,6 +264,90 @@ func (gs *GossipSubRouter) handlePrune(p peer.ID, ctl *pb.ControlMessage) {
gs.tracer.Prune(p, topic)
delete(peers, p)
gs.untagPeer(p, topic)
gs.addBackoff(p, topic)
Contributor:
👍 great!

However, we're not currently doing anything about a peer continuously chasing away other peers by sending GRAFTs. If during a GRAFT we could check whether we're already over the limit and send back a PRUNE, that would largely resolve this case.

@vyzo (Collaborator, author) commented Nov 25, 2019

We don't want to do that, because we would reject all new peers and they might not be able to form a connected mesh.
That's why the router accepts the GRAFT and resolves over-subscription (with randomization) during the heartbeat.

@vyzo (Collaborator, author) commented Nov 25, 2019

Just to be clear, sending a PRUNE if we are over the limit is the wrong thing to do, because then the mesh can become full and fail to accept new peers.
(I considered it when designing the algorithm)
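
For context, here is a minimal sketch of the heartbeat behavior being described; the mesh representation, the d/dhi parameters, and the prune callback are illustrative assumptions, not the actual router code:

```go
import (
	"math/rand"

	"github.com/libp2p/go-libp2p-core/peer"
)

// rebalance trims each over-subscribed topic mesh back down to d by
// pruning randomly chosen peers; GRAFTs are never rejected up front.
func rebalance(mesh map[string]map[peer.ID]struct{}, d, dhi int,
	prune func(p peer.ID, topic string)) {
	for topic, peers := range mesh {
		if len(peers) <= dhi {
			continue // within bounds, nothing to do
		}
		// collect and shuffle the mesh peers to randomize the selection
		plst := make([]peer.ID, 0, len(peers))
		for p := range peers {
			plst = append(plst, p)
		}
		rand.Shuffle(len(plst), func(i, j int) { plst[i], plst[j] = plst[j], plst[i] })
		// keep a random d-sized subset and prune the rest
		for _, p := range plst[d:] {
			delete(peers, p)
			prune(p, topic) // emits a PRUNE control message
		}
	}
}
```

Because the reshuffle runs on every heartbeat, a temporarily over-full mesh heals quickly instead of refusing GRAFTs.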

Contributor:

I get that it used to be that way, but is that still true now that we have peer exchange?

If I send back a PRUNE only when I have more than Dhi peers, and I prune (Dhi-D) peers and tell them about each other, won't they always be able to join the mesh?

Collaborator (author):

They probably will be able to do that, but I'd rather reshuffle the mesh in its entirety instead of rejecting new GRAFTs.

Collaborator (author):

Think about a fully connected mesh where all peers are at D_hi: unlikely as it is, such a mesh would never accept new peers, while GRAFTing and reshuffling resolves the situation.

Contributor:

Ah, so IIUC this is really a timing problem right? Because if B is connected to C and then A tries to connect to B it's possible that when B PRUNEs A and C that A will send a GRAFT to C before C receives the PRUNE from B.

My concern with the current approach is that if A wants to GRAFT to B, it can always keep spamming B and eventually it will get through, even though B has given A plenty of other peers to connect to. This is annoying, since if a peer is "better" in some way (e.g. it is the primary publisher), nodes might be selfish about grafting to it. However, it's certainly not a deal-breaker, and it wasn't even resolvable before the Peer Exchange changes.

Since fixing this would likely require some protocol thought and is less important than the rest of the episub work, seems reasonable to push this further down the road. Would you like me to create an issue?

Collaborator (author):

It could be an attack -- I don't think it can happen naturally.

Collaborator (author):

Sure, we can create an issue to discuss this further, instead of letting it be lost to oblivion after the PR merges.

Member:

There's a recent win here! The decision of which peers to retain and which to eject is now made by evaluating the peer's score! 🎉
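
A rough sketch of what score-based ejection might look like; the score function here is a stand-in for the v1.1 scoring machinery, not its real API:

```go
import (
	"sort"

	"github.com/libp2p/go-libp2p-core/peer"
)

// selectToPrune orders the mesh candidates best-first by score and
// returns the lowest-scoring peers beyond the target mesh size d.
func selectToPrune(plst []peer.ID, d int, score func(peer.ID) float64) []peer.ID {
	sort.Slice(plst, func(i, j int) bool {
		return score(plst[i]) > score(plst[j])
	})
	if len(plst) <= d {
		return nil // nothing to eject
	}
	return plst[d:]
}
```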

[outdated review thread on pb/rpc.proto, resolved]
@aschmahmann (Contributor) left a comment

LGTM

@Stebalien (Member) left a comment

Mostly LGTM but I want to make sure we don't introduce a DoS vector.

gs.addBackoff(p, topic)
px := prune.GetPeers()
if len(px) > 0 {
gs.pxConnect(px)
Member:

I have two concerns:

  1. What if a single peer sends a bunch of prunes? Could they cause us to launch a ton of goroutines (and crash)?
  2. Can't a peer use this to trick us into connecting to a ton of (potentially non-useful) peers? We should only try connecting to the number of peers we actually need (one?).

Ideally, we'd stick these peers in a set of known peers for the topic, connecting to them as-needed whenever we're pruned or we disconnect from a peer.

@vyzo (Collaborator, author) commented Dec 5, 2019

  1. We must already be grafted; we ignore PRUNEs that don't correspond to a peer in our mesh. So a peer can't make us launch an arbitrary number of goroutines by sending us PRUNEs; at most it can make us launch a single goroutine for each topic mesh we share with it. Not much of a vector, I think.

Collaborator (author):

  2. We could have a (legitimate) PRUNE listing an arbitrary number of useless peers; we could limit the number of peers we connect to.

@vyzo (Collaborator, author) commented Dec 5, 2019

For 2 I added a check that limits the number of connections to at most GossipSubPrunePeers, so this should address the concern.
We do want more than one peer in general, to expand our pool of potential peers as much as possible.
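
A sketch of that cap; capPX is a hypothetical helper, and in the real code the check sits in the PRUNE handling path:

```go
import (
	"math/rand"

	pb "github.com/libp2p/go-libp2p-pubsub/pb"
)

// capPX bounds the PX records we act on to at most limit entries
// (GossipSubPrunePeers), picking a random subset so the pruning peer
// cannot deterministically steer all of our connection attempts.
func capPX(px []*pb.PeerInfo, limit int) []*pb.PeerInfo {
	if len(px) <= limit {
		return px
	}
	rand.Shuffle(len(px), func(i, j int) { px[i], px[j] = px[j], px[i] })
	return px[:limit]
}
```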

Member:

Got it. SGTM.

Collaborator (author):

Ok, I slept on it, and there is a DoS vector: a malicious peer could send us GRAFT/PRUNE in sequence and cause us to spawn a goroutine; it could simply sit there sending GRAFT/PRUNE ad infinitum, causing us to spawn a goroutine for each pair.
Granted, the effect is limited by how many goroutines fit inside the 30s connection-timeout window, but it's still nasty.
I will rewrite the connection logic to use a bounded set of pending connections and goroutine-free scheduling.

Collaborator (author):

Implemented a connection scheduler, which also limits the maximum number of pending connections.

@@ -441,16 +553,21 @@ func (gs *GossipSubRouter) heartbeat() {
tograft := make(map[peer.ID][]string)
toprune := make(map[peer.ID][]string)

// clean up expired backoffs
gs.clearBackoff()
Member:

This is probably fine, but we should monitor this.

Member:

That is, walking through each of these every heartbeat should be fine, but could get expensive in some edge cases.

Collaborator (author):

We could do it every few heartbeats instead of every heartbeat.

Member:

Every heartbeat is probably fine, I'm just calling it out so we keep it in mind.

Member:

A much more efficient approach would be a ring buffer of peer slices, with nslots = BackoffPeriod/HeartbeatInterval. You track a tick count that increases monotonically, incremented by one on every heartbeat. When clearing the backoff, you simply compute currentTick mod nslots and clear that slot. That's pretty efficient. The backoff data would turn into map[string][][]peer.ID.
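
A sketch of that ring-buffer scheme under the stated assumptions; note the trade-off vyzo raises further down, namely that answering "is this peer currently backed off?" would still require scanning the whole ring:

```go
import "github.com/libp2p/go-libp2p-core/peer"

// backoffRing keeps, per topic, a ring of nslots peer slices. A peer
// recorded now lands in the slot that the tick counter wraps back to
// after exactly BackoffPeriod, so expiry is one slot-clear per tick.
type backoffRing struct {
	nslots int                    // BackoffPeriod / HeartbeatInterval
	tick   uint64                 // incremented once per heartbeat
	slots  map[string][][]peer.ID // topic -> ring of backed-off peers
}

func (b *backoffRing) add(topic string, p peer.ID) {
	ring := b.slots[topic]
	if ring == nil {
		ring = make([][]peer.ID, b.nslots)
		b.slots[topic] = ring
	}
	slot := int(b.tick % uint64(b.nslots))
	ring[slot] = append(ring[slot], p)
}

// heartbeat advances the tick and clears the slot that has just
// expired: entries added nslots ticks ago vanish in O(1) per topic.
func (b *backoffRing) heartbeat() {
	b.tick++
	slot := int(b.tick % uint64(b.nslots))
	for _, ring := range b.slots {
		ring[slot] = nil
	}
}
```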

@vyzo requested a review from Stebalien · Dec 5, 2019
@vyzo force-pushed the feat/prune-px branch twice, most recently from acba90e to a09bca2 · Dec 5, 2019
[resolved review thread on gossipsub.go]
@Stebalien (Member) left a comment

This works but can't we just have a buffered channel (selecting with default when writing to it)?

Technically, this works slightly better (de-duplicates, etc.), but I'm not sure the complexity is worth it.

[outdated review threads on gossipsub.go, resolved]
@vyzo (Collaborator, author) commented Dec 6, 2019

Actually you are right; we don't need this complexity, we can just use a sufficiently large buffer in the connect channel.
I'll reset the last commit and redo it with the channel.
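
A sketch of the channel-based design being converged on here. The connectInfo type, pool sizes, and the 30s timeout are assumptions; the real code would also have to add the PX peers' addresses (signed peer records) to the peerstore before dialing:

```go
import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
)

type connectInfo struct {
	p peer.ID // addresses are assumed to already be in the peerstore
}

// startConnectors spawns a fixed pool of dialer goroutines fed by a
// buffered channel, bounding both concurrency and pending work.
func startConnectors(ctx context.Context, h host.Host, nWorkers, maxPending int) chan<- connectInfo {
	ch := make(chan connectInfo, maxPending)
	for i := 0; i < nWorkers; i++ {
		go func() {
			for ci := range ch {
				cctx, cancel := context.WithTimeout(ctx, 30*time.Second)
				_ = h.Connect(cctx, peer.AddrInfo{ID: ci.p}) // dial failures are simply dropped
				cancel()
			}
		}()
	}
	return ch
}

// tryConnect enqueues without blocking: when the buffer is full the
// attempt is dropped, so a GRAFT/PRUNE flood cannot pile up work.
func tryConnect(ch chan<- connectInfo, ci connectInfo) {
	select {
	case ch <- ci:
	default:
	}
}
```

The non-blocking send is what removes the DoS vector discussed above: an attacker can fill the buffer, but can never force unbounded goroutine growth.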

@raulk (Member) left a comment

This looks pretty great. Approving to avoid stalling any longer, but I'd really appreciate these two items being addressed before merging (see comments):

  1. A backoff data structure that is more efficient to clear.
  2. A buffered channel for connect attempts, and fewer connector goroutines.

Thanks, @vyzo!

We need to enhance the specs ASAP.

[resolved review thread on gossipsub.go]
p := peer.ID(pi.PeerID)

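// already connected: nothing to do here; the next heartbeat will
// consider this peer for the mesh anyway (see the exchange below)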
_, connected := gs.peers[p]
if connected {
Member:

So we'll connect to PEX peers but wait until the next heartbeat to rebalance the mesh. That's why we can safely skip topic members we're already connected to: we'll consider them in the next heartbeat anyway (as long as we remain connected to them). I think that's correct.

Collaborator (author):

yup, do you want a comment here?

Member:

Ideally ;-)

GossipSubPruneBackoff = time.Minute

// number of active connection attempts for peers obtained through px
GossipSubConnectors = 16
Member:

Isn't this too much? 16 connection attempts at once means we'll try to connect to all peers recommended by the pruning peer (as this value equals GossipSubPrunePeers).

I'd prefer if connect were a buffered channel and we had fewer concurrent goroutines. Right now it's a bit hit or miss: if we get pruned from two topics at once, the connector goroutines will be saturated with the first batch, and all peers from the second batch will be dropped.

Collaborator (author):

It is a buffered channel! But yes, we can certainly lower it from 16; how about 8 or 4?

Member:

The max inflight dials FD limit is 160 by default. Assuming each peer has an average of 3 addresses, 16 connectors could use 16 × 3 = 48 dials, i.e. 30% of our FD allowance. I'd scale this down to 8 by default, but add an option so the app can increase/decrease it (it may also touch the swarm limits!).

Also, we will need some fairness heuristic here. If we're pruned from two or three topics at once, the first topic will get all the slots and the other two will be queued. Instead, we might want to interleave peers from topics 1, 2, and 3 for quicker healing. Some form of priority queue could work well.

Collaborator (author):

the hardening branch reduces this to 8.

@raulk (Member) left a comment

We need to merge the pending PRs in go-libp2p-core and go-libp2p-peerstore before we merge this one. Those commit hashes in go.mod are nasty.

@vyzo (Collaborator, author) commented Dec 28, 2019

Re: backoff data structure

I am not entirely convinced the circular buffer is more efficient, as we'd have to iterate through the entire ring to check whether a peer is currently backed off.
What we can do instead, which will improve efficiency, is to clear backoffs only once every few ticks (say every 10-15 ticks for the default 1min backoff).
How does that sound?
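
A sketch of that amortized sweep, assuming a backoff table shaped topic -> peer -> expiry (the parameter names are illustrative):

```go
import (
	"time"

	"github.com/libp2p/go-libp2p-core/peer"
)

// clearBackoff removes expired entries, but only once every
// sweepEvery heartbeats; between sweeps, stale entries merely linger
// and any time-based check still treats them as expired.
func clearBackoff(backoff map[string]map[peer.ID]time.Time, tick, sweepEvery uint64) {
	if tick%sweepEvery != 0 {
		return // amortize the full scan across heartbeats
	}
	now := time.Now()
	for topic, peers := range backoff {
		for p, expire := range peers {
			if expire.Before(now) {
				delete(peers, p) // deleting during range is safe in Go
			}
		}
		if len(peers) == 0 {
			delete(backoff, topic)
		}
	}
}
```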

@vyzo (Collaborator, author) commented Mar 24, 2020

rebased on master

@vyzo (Collaborator, author) left a comment

alright, ready to merge.


@vyzo merged commit 7cafd84 into master · Mar 24, 2020
@vyzo deleted the feat/prune-px branch · Apr 14, 2020