Swaps in ConcurrentLru for LruCache. #2978
Conversation
The current LruCache uses locks for thread synchronization, which can be quite slow. This change swaps in a concurrent LRU instead. I chose to wrap the existing LruCache to minimize the changes in this PR. In a future PR, it would be reasonable to use `ConcurrentLru` directly, though that would require adding it as a dependency to a lot of projects.

Note: I don't know Nethermind's view on third-party libraries. The referenced library is open source and MIT-licensed, so we could copy it over rather than referencing it via NuGet, but that is marginally more work, and since there is a good chance this change doesn't get accepted, I chose to do less work initially.

Note: The LruCache `name` constructor parameter was never used, so I have removed it as part of this PR, since I was touching all of the constructor calls anyway due to the removal of `startCapacity`.

Note: I did this because I mistakenly believed there was a race condition that might be leading to data corruption. I eventually realized that wasn't the case, but since I had already done most of the legwork for this PR, I figured I would follow through with it, as it may still provide a performance improvement.
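To illustrate the wrapping approach described above, here is a minimal sketch in Java (the PR itself is C#, and the real `ConcurrentLru` comes from BitFaster.Caching, so every name here is hypothetical): the old cache's `Get`/`Set` surface is preserved while the implementation is swapped underneath, so call sites don't change. A synchronized access-ordered `LinkedHashMap` stands in for the cache internals.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical adapter: keeps the old LruCache-style API (get/set) while
// delegating to a different implementation, so existing call sites are
// untouched. The real PR wraps BitFaster.Caching's ConcurrentLru; here a
// synchronized, access-ordered LinkedHashMap plays that role for the sketch.
class LruCacheAdapter<K, V> {
    private final Map<K, V> inner;

    LruCacheAdapter(int maxCapacity) {
        // accessOrder = true makes iteration order least-recently-used first,
        // and removeEldestEntry gives us bounded LRU eviction.
        this.inner = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxCapacity;
                }
            });
    }

    V get(K key) { return inner.get(key); }

    void set(K key, V value) { inner.put(key, value); }
}
```

The point of the adapter is that a later PR can delete it and use the concurrent cache directly, one project at a time, without a big-bang change.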
MIT is fine. Can you add a benchmark to the Benchmark solution that shows and proves the performance difference?
What exactly would you like to see in a performance test? One could contrive a performance test that makes this new solution look great, and one could contrive a performance test that makes it look like no change or a step backward. 😄 Do you have any existing performance tests that could be run that are somewhat representative of the real-world scenarios Nethermind encounters? It is worth noting that in some situations a plain dictionary with a lock is faster than a `ConcurrentLru`.

As for off-the-shelf benchmarks, you can check out the benchmarks done by the project, which include a comparison against a "ClassicLru" implementation, an optimized version of the LRU Nethermind currently uses: https://github.com/bitfaster/BitFaster.Caching#concurrentlru-throughput
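The "benchmarks can be contrived" point above can be seen with a tiny sketch (again in Java, with hypothetical names; a real comparison for the C# code belongs in a proper harness such as BenchmarkDotNet): under purely single-threaded, uncontended access, a plain map behind a lock can match or beat a concurrent map, while under heavy multi-threaded contention the concurrent structure usually wins. The workload choice decides the winner.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical micro-benchmark: the same put/get workload run against a
// locked HashMap and a ConcurrentHashMap. Single-threaded, so the lock is
// never contended -- exactly the kind of setup that can make the "slow"
// locked cache look fine. Timings are indicative only.
public class CacheBench {
    static long time(Map<Integer, Integer> map, int ops) {
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            map.put(i % 1024, i); // small hot key set, like a cache
            map.get(i % 1024);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int ops = 1_000_000;
        Map<Integer, Integer> locked = Collections.synchronizedMap(new HashMap<>());
        Map<Integer, Integer> concurrent = new ConcurrentHashMap<>();
        // Warm up once, then measure.
        time(locked, ops);
        time(concurrent, ops);
        System.out.printf("locked:     %d ms%n", time(locked, ops) / 1_000_000);
        System.out.printf("concurrent: %d ms%n", time(concurrent, ops) / 1_000_000);
    }
}
```

A representative benchmark for this PR would instead mirror Nethermind's real access pattern: many reader threads, a skewed key distribution, and realistic hit rates.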
Depends on #2995
I think this can actually be closed? IIRC, you indicated someone else had a PR that did this way better (which I'm totally happy with).
@MicahZoltu we are reviving our benchmarks; when that's done we can properly check this change. Let's wait for it.
We need to add some benchmarks before accepting this |