
In-memory backed RedisStore #20

Closed
bmorton opened this issue Feb 12, 2020 · 2 comments

bmorton commented Feb 12, 2020

It seems like the cache entries that are stored to Redis can be long-lived and don't need to be invalidated (the hash will always resolve to the same query), so we are looking at using the Redis implementation but caching Redis lookups locally to avoid the network call when possible. It's essentially a combination of the in-memory adapter with the Redis adapter.

I can think of three options for implementing this:

  1. Add this support to the RedisStore so a config option can be passed to allow local caching
  2. In my project, create a new LocalCachedRedisStore that takes a RedisStore and an in-memory store and does both
  3. Same as 2, but included with this repo instead of directly in my project

Do you have a preference here? Should I keep this to my project alone or would you accept a PR for something like this?
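For options 2 and 3, a composed store might look something like the sketch below. The class and method names (`LocalCachedRedisStore`, `fetch_query`, `save_query`) are assumptions for illustration, not the gem's confirmed interface, and a plain hash-backed stub stands in for the real Redis adapter:

```ruby
# Hypothetical sketch: wrap a slower backing store (e.g. the gem's Redis
# adapter) with an in-memory cache. Since a hash always resolves to the
# same query, entries cached locally never need invalidation.
class LocalCachedRedisStore
  def initialize(backing_store)
    @backing_store = backing_store
    @memory = {}
  end

  # Serve from memory first; on a miss, fall back to the backing store
  # and remember the result locally.
  def fetch_query(hash)
    @memory[hash] ||= @backing_store.fetch_query(hash)
  end

  # Write through to both stores.
  def save_query(hash, query)
    @memory[hash] = query
    @backing_store.save_query(hash, query)
  end
end

# Stand-in for the Redis-backed store, so this sketch runs without a
# Redis connection.
class FakeRedisStore
  def initialize
    @data = {}
  end

  def fetch_query(hash)
    @data[hash]
  end

  def save_query(hash, query)
    @data[hash] = query
  end
end
```

Under these assumptions, the wrapper could be passed anywhere the gem expects a store, and repeated lookups for the same hash would skip the network call entirely.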

@DmitryTsepelev (Owner)

Sorry for not being super responsive 🙁 So the idea is that this new adapter would make a Redis call the first time, and then cache the response in memory forever, right?

I wonder what the measured benefit would be compared to a Redis-only configuration, but I don't mind having such a solution in the repo 🙂 I think I'd prefer the third option, with an adapter that combines two stores into a single one—it will help to avoid giving too much power to the current Redis adapter.


bmorton commented Feb 18, 2020

The measured benefit is less about latency and more about resiliency. Reducing network calls seems like a responsible way to ensure we're not asking clients to send a second request more than they need to (which I suppose ultimately would result in increased client-perceived latency).

The reason we at Yammer are looking into APQ at all is in an effort to reduce overhead of the GraphQL layer, so we think this approach should help.

I've taken a swing at it in #23. With how modular this gem is, it was really easy to compose these objects together!
