Allow concurrent calls to Engine #673

Open · wants to merge 1 commit into main
Conversation

mariusdv

(First PR on this codebase)
Rewrote the way the global information is stored so that concurrent requests to both the Execute and Invoke methods are allowed.

This mainly uses AsyncLocal<T>, which was introduced in .NET Framework 4.6 (this required a package upgrade).

Execution Context & Call Stacks
Previously, these structures were kept in a simple stack. When running multiple invocations, this stack would be polluted by the different calls. To separate the data for every call while keeping the ability to share common state, I use AsyncLocals; these remain scoped to a single call.
The structure is also set up to allow concurrent steps within a single invocation, in case we ever want to run some things in parallel.
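The per-call isolation described above can be sketched with `AsyncLocal<T>`. This is a minimal sketch with invented names (`CallStackHolder` is not an actual Jint type):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Minimal sketch, assuming a hypothetical CallStackHolder: AsyncLocal<T>
// gives each asynchronous flow its own copy of the value, so two
// concurrent invocations no longer see each other's call stack.
public static class CallStackHolder
{
    private static readonly AsyncLocal<Stack<string>> _frames = new AsyncLocal<Stack<string>>();

    public static Stack<string> Frames
    {
        get
        {
            if (_frames.Value == null)
            {
                // Assigning here only affects the current async flow and
                // its children; sibling flows keep their own stacks.
                _frames.Value = new Stack<string>();
            }
            return _frames.Value;
        }
    }
}
```

Two `Task.Run` callers that each push a frame will both observe a stack depth of 1, because an assignment made inside one flow never leaks into a sibling flow.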

StrictModeScope and EvalCodeScope
These used to be based on a disposable pattern that modified the same static state, which similarly gets polluted during concurrent requests. The pattern changed from disposables to delegates; the change in usage is minimal, while the scoping is now provided correctly.
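The disposable-to-delegate change can be sketched like this (a minimal sketch with invented names, not the actual Jint implementation):

```csharp
using System;
using System.Threading;

// Hypothetical sketch of the pattern change: the disposable version
// mutated shared static state; this delegate version scopes the flag to
// the current async flow via AsyncLocal<T>, so concurrent callers never
// observe each other's value.
public static class StrictModeScope
{
    private static readonly AsyncLocal<bool> _strict = new AsyncLocal<bool>();

    public static bool IsStrict => _strict.Value;

    // Runs the callback with strict mode set; the previous value is
    // restored afterwards even if the callback throws.
    public static T With<T>(bool strict, Func<T> body)
    {
        var previous = _strict.Value;
        _strict.Value = strict;
        try { return body(); }
        finally { _strict.Value = previous; }
    }
}
```

Usage changes from `using (new StrictModeScope(true)) { ... }` to something like `StrictModeScope.With(true, () => Evaluate(node))` (again, illustrative names only).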

Others
To make this scenario work, I also changed some dictionaries to concurrent dictionaries and altered the global cache objects so they use the ConcurrentObjectPool instead of the ObjectPool.
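As a small illustration of the dictionary change (illustrative names, not the actual Jint caches): `ConcurrentDictionary<TKey, TValue>.GetOrAdd` replaces the unsafe check-then-insert sequence with a single thread-safe call.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative cache: GetOrAdd is safe under concurrent callers. Note
// that the value factory may run more than once under contention, but
// only one result is ever stored and returned to all callers.
public static class ExampleCache
{
    private static readonly ConcurrentDictionary<string, int> _lengths =
        new ConcurrentDictionary<string, int>();

    public static int LengthOf(string key) => _lengths.GetOrAdd(key, k => k.Length);
}
```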

@mariusdv mariusdv changed the title Allow concurrent calls to engine Allow concurrent calls to Engine Oct 16, 2019
@lahma
Collaborator

lahma commented Oct 17, 2019

Awesome work here! I'm a bit concerned about the performance. Currently Jint is highly optimized to work as a single-threaded engine, and you are expected to pool instances outside of Jint's logic. Where does the need come from to concurrently access the same engine and data structures? Historically, JavaScript has been run using a single-threaded model, which has its benefits and shortcomings.

@mariusdv
Author

mariusdv commented Oct 17, 2019

For me the main use case would be users wanting to incorporate this lib in a web service.

Currently I see two options for this use case:

  • load the functions used (especially taxing when they are part of a sizable lib) each time you receive a request
  • create a pool of engines and manage the resources to prevent concurrent access.

The first one not only impacts response times, it also uses a lot more memory than required due to the duplicated stored information, which grows directly with the number of requests.

Pooling would prevent the memory allocation, but decreases performance because the number of concurrently processed calls is limited once the pool hits its limit.
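The second option above can be sketched as a bounded pool guarded by a semaphore. This is a generic sketch: `TEngine` stands in for Jint's engine type, and the factory delegate is a hypothetical warm-up step that would load the shared function library once per instance.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// A minimal engine pool: at most 'size' engines exist, and callers
// block on the semaphore once the pool is exhausted (the performance
// limit discussed above).
public sealed class EnginePool<TEngine>
{
    private readonly ConcurrentBag<TEngine> _engines = new ConcurrentBag<TEngine>();
    private readonly SemaphoreSlim _slots;
    private readonly Func<TEngine> _factory;

    public EnginePool(int size, Func<TEngine> factory)
    {
        _slots = new SemaphoreSlim(size, size);
        _factory = factory;
    }

    public TResult Run<TResult>(Func<TEngine, TResult> work)
    {
        _slots.Wait();                       // blocks when all engines are busy
        if (!_engines.TryTake(out var engine))
            engine = _factory();             // lazily create up to 'size' engines
        try { return work(engine); }
        finally
        {
            _engines.Add(engine);            // return the engine to the pool
            _slots.Release();
        }
    }
}
```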

As to the hit in performance, I would have to run the benchmark :)

@ayende
Contributor

ayende commented Oct 17, 2019

Given that a lot of this is related to the need to parse / hold state:
Can we do things differently? Have a read-only scope of some sort?
So I can create an engine, load some functions / scripts, then get that immutable state and reuse it across engines?

@mariusdv
Author

I quite like that approach as well 👍

@lahma
Collaborator

lahma commented Oct 17, 2019

I think the problem is that even with concurrency support you would suffer from shared or dirty state. Pre-parsed programs (Esprima) should give you quite fast evaluation even if you build an engine per request. Work was done earlier to make engine creation faster, and of course we want all execution to be fast. The single-threaded path should be quite fast, and locks would mean slower execution. It would be nice to see some benchmarks that show the use case for sharing these engine instances.
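The pre-parsed-programs idea can be sketched generically: pay the expensive parse once, keyed by source text, and give each request a fresh, cheap engine over the cached artifact. `TProgram` stands in for an Esprima AST, and the parse delegate is a placeholder for the real parser; none of these names are Jint's actual API.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical program cache: the parse delegate runs (at least) once
// per distinct source, and every later request reuses the stored result.
public static class ProgramCache<TProgram>
{
    private static readonly ConcurrentDictionary<string, TProgram> _cache =
        new ConcurrentDictionary<string, TProgram>();

    public static TProgram GetOrParse(string source, Func<string, TProgram> parse)
        => _cache.GetOrAdd(source, parse);
}
```

A per-request engine would then execute the cached program instead of re-parsing the source, which keeps request handling single-threaded per engine while still avoiding the repeated parse cost.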

@sebastienros
Owner

I also agree there must be a way to reuse "hot" engines that are somehow immutable, probably with some higher-level prototype, or at least an intermediate one. Maybe based on a dedicated mode where the default objects are immutable (global, Error, Math, ...) and only the user context would be released. So even if an engine is not thread-safe, it could at least be pooled and cleaned cheaply. Think of a "Freeze" method: from that point on any existing state can't be changed, and only the user code's state can be.

Then we could even imagine freezing right after the standard objects are initialized, and provide an automatic engine pool, which would improve all the common scenarios without any custom configuration. We know that creating an Engine already executes lots of C# code by default (every Constructor object, all the lambdas, ...).
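A minimal sketch of that Freeze idea, with entirely invented names (this is not a proposed Jint API, just the shape of the design): built-ins become read-only after initialization, and each request layers a mutable user scope on top that can be discarded cheaply.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical: shared, frozen built-ins plus a throwaway user layer.
public sealed class FrozenGlobals
{
    private readonly Dictionary<string, object> _builtins = new Dictionary<string, object>();
    private bool _frozen;

    public void Define(string name, object value)
    {
        if (_frozen) throw new InvalidOperationException("Globals are frozen.");
        _builtins[name] = value;
    }

    public void Freeze() => _frozen = true;

    // Resetting a pooled engine is just discarding the user layer.
    public IDictionary<string, object> NewUserScope() => new Dictionary<string, object>();

    // Lookup checks the request's own scope first, then the frozen built-ins.
    public bool TryGet(string name, IDictionary<string, object> userScope, out object value)
        => userScope.TryGetValue(name, out value) || _builtins.TryGetValue(name, out value);
}
```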

@rulehero

Just throwing out an alternative idea I've been toying around with. You could put the instance of the engine in a Grain using Microsoft Orleans. It should handle the pooling/concurrency issues for you and provide another layer of state sharing/management within the Grain if you need it.

https://github.com/dotnet/orleans
https://dotnet.github.io/orleans/Documentation/grains/reentrancy.html

Out of curiosity, does anyone have any metrics around the cost of creating an Engine vs reusing one?

@Ghostbird

Previously, these structures were kept in a simple stack. When running multiple invocations, this stack would be polluted by the different calls.

This would explain #704 that we've run into. @sebastienros I'd love to see this merged, because I think it would enable our use of deferred calls too. In our case we use Jint in a not-ultra-light way, and a small performance hit would be an acceptable trade-off in my opinion.

@mariusdv I assume it would be hard to make this an Engine option?

6 participants