
Optimize Validator Pipeline #103

Closed · vyzo opened this issue Sep 6, 2018 · 2 comments · Fixed by #176

Comments

vyzo (Collaborator) commented Sep 6, 2018

Our validation pipeline is asynchronous: it spawns a goroutine for each incoming message, which may then be throttled by the per-topic and global limits.
This creates a performance problem for message signature validation (#97), as we now have to enter the validation pipeline for every message when signing is enabled (planned to become the default in ipfs and filecoin).
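To make the cost concrete, here is a minimal sketch of the per-message pattern described above, with hypothetical names (not the actual go-libp2p-pubsub API):

```go
package validation

// Message and pipeline are hypothetical stand-ins, not the actual
// go-libp2p-pubsub types.
type Message struct{ /* ... */ }

type pipeline struct {
	throttle chan struct{} // models the topic/global limits
}

// push spawns a goroutine per message: this is the cost paid for
// every message once signature validation forces each one through
// the pipeline.
func (p *pipeline) push(msg *Message) {
	go func() {
		select {
		case p.throttle <- struct{}{}:
			defer func() { <-p.throttle }()
			p.validate(msg) // includes signature verification
		default:
			// throttled: drop the message
		}
	}()
}

func (p *pipeline) validate(*Message) {} // placeholder
```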

Proposed action (sketched below):

  • pre-spawn NUMCPU goroutines to handle the validation front-end
  • validate all signatures in the validation front-end goroutines
  • distinguish between synchronous validators and asynchronous validators (which may block and be otherwise slow due to network effects), such that:
    • synchronous validators are applied in the validation front-end
    • asynchronous validators are handled in a freshly spawned goroutine, limited by the throttles
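A minimal sketch of that design, assuming hypothetical Message, Validator, and pipeline types (none of this is the actual go-libp2p-pubsub API): NUMCPU pre-spawned workers verify signatures and run synchronous validators inline, while asynchronous validators are handed off to freshly spawned, throttled goroutines.

```go
package validation

import "runtime"

// Hypothetical types for illustration only.
type Message struct{ /* ... */ }

type Validator interface {
	Validate(*Message) bool
	Async() bool // true if the validator may block (network effects)
}

type pipeline struct {
	in            chan *Message
	validators    []Validator
	asyncThrottle chan struct{} // bounds concurrent async validations
}

func newPipeline(validators []Validator, maxAsync int) *pipeline {
	p := &pipeline{
		in:            make(chan *Message),
		validators:    validators,
		asyncThrottle: make(chan struct{}, maxAsync),
	}
	// Pre-spawn NUMCPU front-end workers instead of one goroutine per message.
	for i := 0; i < runtime.NumCPU(); i++ {
		go p.frontend()
	}
	return p
}

func (p *pipeline) frontend() {
	for msg := range p.in {
		if !verifySignature(msg) { // signatures checked in the front-end
			continue
		}
		ok := true
		for _, v := range p.validators {
			if !v.Async() && !v.Validate(msg) { // sync validators run inline
				ok = false
				break
			}
		}
		if !ok {
			continue
		}
		// Async validators get a fresh goroutine, limited by the throttle.
		select {
		case p.asyncThrottle <- struct{}{}:
			go func(m *Message) {
				defer func() { <-p.asyncThrottle }()
				for _, v := range p.validators {
					if v.Async() && !v.Validate(m) {
						return // validation failed; drop the message
					}
				}
				deliver(m)
			}(msg)
		default:
			// throttle full: drop rather than block the front-end
		}
	}
}

func verifySignature(*Message) bool { return true } // placeholder
func deliver(*Message)              {}              // placeholder
```

Dropping on a full throttle (rather than blocking) keeps the front-end workers free to keep verifying signatures at full speed.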
aarshkshah1992 (Contributor) commented:

@vyzo I would like to work on this. Is this up for grabs? I have read the go-libp2p pubsub code and am looking for ways to contribute. This looks like a great first task!

vyzo (Collaborator, Author) commented Jan 5, 2019

This is a relatively tricky issue to tackle as a starting project, and we are also not sure yet whether it is actually a problem.
