
Bound max count of lookups #6015

Merged: 2 commits merged into sigp:unstable on Jul 12, 2024

Conversation

dapplion
Collaborator

Issue Addressed

Lookups contain untrusted data, including blocks that have not yet been validated. In case of bugs or malicious activity we want to bound how much memory these lookups can consume. The approximate max size of a lookup is ~10 MB (the current max size of gossip and RPC blocks), so 200 lookups can take at most ~2 GB. 200 lookups also allow for 3 parallel chains of depth 64 (the current maximum).
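
As a back-of-the-envelope check of those figures (illustrative constants only, not code from this PR):

// Illustrative only: sanity-checking the numbers quoted in the description.
const MAX_LOOKUP_SIZE_BYTES: usize = 10 * 1024 * 1024; // ~10 MB per lookup (gossip/RPC block cap)
const MAX_LOOKUPS: usize = 200;                        // bound introduced by this PR
const WORST_CASE_BYTES: usize = MAX_LOOKUPS * MAX_LOOKUP_SIZE_BYTES; // ~2 GB worst case
const MAX_CHAIN_DEPTH: usize = 64;                     // current parent-chain depth limit
const PARALLEL_CHAINS: usize = MAX_LOOKUPS / MAX_CHAIN_DEPTH;        // = 3 full-depth chains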

Proposed Changes

  • Bound max count of lookups to 200
  • Move the log line to after the lookup is actually created

dapplion added the ready-for-review and Networking labels on Jun 27, 2024
if self.single_block_lookups.len() > MAX_LOOKUPS {
    warn!(self.log, "Dropping lookup reached max"; "block_root" => ?block_root);
    return false;
}
Member


nit: I think we can probably move this up, before we even create SingleBlockLookup?
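
A minimal sketch of that suggestion: check the bound before constructing the lookup at all. SingleBlockLookup is the type named in the diff; everything else here (LookupStore, add_lookup, the fields) is a hypothetical stand-in, not Lighthouse's actual API:

use std::collections::HashMap;

const MAX_LOOKUPS: usize = 200;

// Toy stand-in for the real lookup type; holds only the block root here.
struct SingleBlockLookup {
    block_root: [u8; 32],
}

#[derive(Default)]
struct LookupStore {
    single_block_lookups: HashMap<u64, SingleBlockLookup>,
    next_id: u64,
}

impl LookupStore {
    /// Returns false, and allocates nothing, once the bound is reached.
    fn add_lookup(&mut self, block_root: [u8; 32]) -> bool {
        if self.single_block_lookups.len() >= MAX_LOOKUPS {
            // Drop the request before building a SingleBlockLookup at all.
            return false;
        }
        let id = self.next_id;
        self.next_id += 1;
        self.single_block_lookups
            .insert(id, SingleBlockLookup { block_root });
        true
    }
}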

/// Lookups contain untrusted data, including blocks that have not yet been validated. In case of
/// bugs or malicious activity we want to bound how much memory these lookups can consume. The
/// approximate max size of a lookup is ~10 MB (current max size of gossip and RPC blocks), so 200
/// lookups can take at most 2 GB. 200 lookups allow 3 parallel chains of depth 64 (current maximum).
Member


How do we decide which chains are the better chains to progress?

Right now we drop the parent chains when they reach a certain depth here. In a highly forked network, we could have multiple lookup chains and reach 200 quite easily.

If we can't continue a parent lookup for one of the chains because we've reached the limit of 200, what do we do next?

  • is this parent chain now stuck, and will it simply get dropped after 15 minutes? or do we create a lookup for this parent later, once the total number of lookups drops below a certain number?
  • or should we consider removing other chains (when we reach the 200 limit) that are potentially less useful or have fewer peers? (not sure how to determine this though)

Collaborator Author

@dapplion Jul 12, 2024


These are valid concerns, but they should be addressed in another PR / issue. In summary:

  • If there are two forks and only one is producing blocks frequently, range sync will be triggered and we will recover from it.
  • If there are two forks both producing blocks frequently enough, we would not be able to sync this chain due to how range sync qualifies "Synced" peers. This is a known issue and should be taken care of in the future.

Whatever we do, we can't allow peers to OOM us in any case. However we address the lookup -> range sync case, we still have to bound the total count of lookups.

RE prioritization: the latest lookups to be created will fail. Lookup sync should not be in charge of syncing large sections of blocks; that's the duty of range sync. In the case where the large count of lookups is the result of malicious activity, the bound allows the existing lookups to be processed without killing Lighthouse; we can then ban those peers and resume processing the rest.
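
To make that behavior concrete, here is how the toy LookupStore sketch from the review thread above plays out at the bound; again, this is illustrative only, not Lighthouse's actual flow:

fn demo_bound_behavior() {
    let mut store = LookupStore::default();
    // Existing lookups keep being tracked and processed as normal.
    for i in 0..MAX_LOOKUPS {
        assert!(store.add_lookup([i as u8; 32]));
    }
    // Once at the bound, newly created lookups are rejected instead of growing memory.
    assert!(!store.add_lookup([0xff; 32]));
    // After an existing lookup completes (or its peers are banned) and is removed,
    // new lookups are accepted again.
    store.single_block_lookups.remove(&0u64);
    assert!(store.add_lookup([0xff; 32]));
}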

Member


Sounds good!

jimmygchen added the ready-for-merge label and removed the ready-for-review label on Jul 12, 2024
@jimmygchen
Member

@mergify queue


mergify bot commented Jul 12, 2024

queue

✅ The pull request has been merged automatically at 0c0b56d

mergify bot added a commit that referenced this pull request Jul 12, 2024
mergify bot merged commit 0c0b56d into sigp:unstable on Jul 12, 2024
28 checks passed
Labels
Networking, ready-for-merge (This PR is ready to merge.)