
SetAsync<TValue> not throwing when failed #34

Closed
sanllanta opened this issue Nov 25, 2021 · 18 comments

@sanllanta

I'm using the library to store values in Redis. When saving data with the SetAsync method, I've found that the returned task doesn't fault when an exception occurs while saving the data. I would like to be able to recover from this scenario, as I need to ensure that the data was saved successfully. What's the best way of handling it? Digging into the code, I see that RunAsyncActionAdvancedAsync in FusionCacheExecutionUtils has a reThrow parameter that would solve the issue, but it is always passed as false with no way to override it. I believe I could simply fetch the value afterwards with TryGet, but it would be easier if there was a way to know whether the Set method failed.
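To illustrate, here's a minimal sketch of what I mean (the key, the Product type and the cache instance are just placeholders):

```csharp
// Placeholder key/type, just to illustrate the scenario.
await cache.SetAsync("product:42", product, new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5)
});

// Even if the write to Redis throws internally, no exception reaches this point.
// The only workaround I see is reading the value back to verify it (which only
// proves the distributed write if the read isn't served by the memory layer):
var maybeSaved = await cache.TryGetAsync<Product>("product:42");
if (!maybeSaved.HasValue)
{
    // handle the failed write (log, retry, surface an error, ...)
}
```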

Thanks for your help. I'm really liking the library so far.

@jodydonetti
Collaborator

jodydonetti commented Nov 25, 2021

Hi @sanllanta , thanks for using FusionCache!

Design-wise in FusionCache there's a difference between the primary (memory) and secondary (distributed) layers, and they are not treated in the same way (more on this here, just at the beginning).

Why?

This is because writes to the secondary layer should not interfere with the normal application flow, at least by default: applications consuming the cache should "think locally", knowing that the data will be propagated (if a distributed cache has been properly configured) in a kind of "best effort" way.
You can even enable background processing of these operations on the distributed cache, by setting FusionCacheEntryOptions.AllowBackgroundDistributedCacheOperations to true, for even better performance.
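Just to give a concrete idea, something like this (key and value are placeholders):

```csharp
// The Set call returns as soon as the memory cache has been updated: the
// distributed cache write then completes in the background, best-effort style.
await cache.SetAsync("product:42", product, new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    AllowBackgroundDistributedCacheOperations = true
});
```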

Now, regarding your observation about RunAsyncActionAdvancedAsync: you are in fact correct, and that is the case because of the reasoning I described above.

Example

In a typical scenario you get some data from a database and save it in the cache to avoid executing a query every single time: if one write does not go well, we will simply try again the next time, no big deal.
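In code it's something like this (just a sketch, with placeholder names): if the distributed write fails, the worst case is that the factory runs again on a later miss.

```csharp
// Typical read-through usage: the factory (a database query here) only runs
// on a cache miss. A failed write to the distributed cache just means the
// factory may run again next time: no big deal.
var product = await cache.GetOrSetAsync<Product>(
    "product:42",
    async _ => await LoadProductFromDbAsync(42),
    new FusionCacheEntryOptions { Duration = TimeSpan.FromMinutes(5) }
);
```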

This is just to give you an idea of how I designed it, I hope I've been able to explain my thought process.

Can you instead give me an example of why, in your use case, it can be problematic to just wait for the next round to update the distributed cache? I'm curious to understand more.

But (because there's always a but 😄)

I have to say that the use case you described may make sense, even if I think that normally it is more common to use the distributed cache with a "best effort" approach, since it's secondary.

I have to think about it a little bit more, but here are 2 possible directions.

Idea 1

As mentioned, there's already the FusionCacheEntryOptions.AllowBackgroundDistributedCacheOperations option: what I may add is another option, something like FusionCacheEntryOptions.ReThrowDistributedCacheExceptions that, if set to true, would do what you are asking (and basically drive that bool reThrow param you mentioned).
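Usage could then look something like this (keep in mind the option doesn't exist yet, this is only a sketch of the idea):

```csharp
// Hypothetical: ReThrowDistributedCacheExceptions is just the proposed option,
// not something currently available in FusionCacheEntryOptions.
try
{
    await cache.SetAsync("product:42", product, new FusionCacheEntryOptions
    {
        Duration = TimeSpan.FromMinutes(5),
        ReThrowDistributedCacheExceptions = true
    });
}
catch (Exception)
{
    // the distributed cache write failed: react here (retry, alert, ...)
}
```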

Of course I should think about (and document) how it should behave with the other existing options.
One example: what if you allow background distributed cache operations? Then it would not make sense to rethrow, since the calling code has already moved on, right?
Another one is distributed cache timeouts (both soft and hard): if you set one of those, should I rethrow the timeout exception or, in that specific case, ignore it? Or should the timeouts and/or the background execution be automatically disabled if the new flag is set to true?
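To make the timeouts case concrete, this is the kind of combination I mean (values are placeholders):

```csharp
// Existing timeout options for the distributed cache: what should happen if
// one of these fires while the (hypothetical) rethrow flag is enabled?
var options = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    DistributedCacheSoftTimeout = TimeSpan.FromMilliseconds(100),
    DistributedCacheHardTimeout = TimeSpan.FromSeconds(1)
    // + ReThrowDistributedCacheExceptions = true (hypothetical new flag)
};
```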

Another important point to consider is this:

  • you call a Set method
  • it saves the data in the memory cache
  • it tries to save the data in the distributed cache
  • but it's not able to do it, because the distributed cache throws an exception
  • now what about the data saved in the memory cache? The data is there, and the next Get would find it "normally", but your code (which called the Set method) receives an exception: it may seem counter-intuitive to get an exception even though the data was saved in the memory cache

I mean it seems kinda easy at first, but we should think about side effects + edge cases.

Idea 2

The code that works with the distributed cache already has a small integrated circuit breaker that deals with transient errors.

An idea may be to also add a little retry logic, with some configurable options like int MaxRetries and TimeSpan RetryDelay.
But I was hesitant about adding too much there, and in fact I mentioned in the docs that you can use something more specific, like Polly (which is already used by some official .NET libs).
Why am I hesitant? Because what may seem "easy", like "retry a couple of times", can in fact hide a lot, like exponential backoff and other concerns of that kind.
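Just to show the shape of it, here's a rough sketch with Polly (names and values are placeholders, not a recommendation):

```csharp
using Polly;

// A simple exponential-backoff retry around the Set call: 3 attempts, waiting
// 2s, 4s and 8s between them. Real workloads may need jitter, logging, etc.
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

await retryPolicy.ExecuteAsync(async () =>
    await cache.SetAsync("product:42", product));
```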

Let me know

Anyway I'm open to ideas, so let me know what you think about my observations and we can work something out!

jodydonetti self-assigned this Nov 25, 2021
jodydonetti changed the title from "SetAsync<TValue> not trhowing when failed" to "SetAsync<TValue> not throwing when failed" Nov 25, 2021
@jodydonetti
Collaborator

Thanks for your help. I'm really liking the library so far.

PS: I got carried away by my looong answer, but thanks! I'm glad you are liking it!

@sanllanta
Author

Thank you for the thorough answer. I think for the usage you describe the library does a great job. My current use case might be a bit of an outlier, but here it is: I have a service that populates data through Redis for consumption by a couple of other services, and that data gets refreshed once in a while. For this case I have no use for the in-memory layer (this is something I'd love to do easily in FusionCache, but I understand it's not the primary use case, so I worked around it by setting the max entries for the memory cache to 0). I considered using another library to publish directly to Redis, but in this case I'm forced to rely on FusionCache, since the services that read from Redis also use it and it stores the data in a particular way that I'm not sure is easy to replicate.

@jasenf

jasenf commented Nov 25, 2021

Can FusionCache treat redis as just a backplane for communicating invalidation across client nodes rather than storing the content?

@jodydonetti
Collaborator

jodydonetti commented Nov 25, 2021

Hi @jasenf, I've ramped up the work on the backplane (see #11) a lot these last couple of weeks, and it's basically done.
I'm finishing some stress testing, documentation and a few details, but it's basically ready for prime time.

I think I would be able to release it next week.

@jodydonetti
Collaborator

Thanks @sanllanta for the explanation, makes sense.

So if I understand correctly, ideally you would be on the side of something like #26, right?

@jasenf

jasenf commented Nov 25, 2021

Thanks. No hurry, was just curious. Issue #11 didn't make any mention of Redis ONLY serving as the mechanism for coordinating evictions. We don't want any secondary cache (with values stored in Redis). Is that the intent?

@sanllanta
Author

Thank you @jodydonetti. That's one side of it, yes, but I think I have it already covered with the configuration workaround. On the other hand, I like your Idea 1 of a new config option, FusionCacheEntryOptions.ReThrowDistributedCacheExceptions. Maybe it could also be set in a more granular way inside FusionCacheEntryOptions, so you could specify for which cases you would like the exceptions to bubble up.

@jodydonetti
Collaborator

jodydonetti commented Nov 25, 2021

Issue #11 didn't make any mention of redis ONLY serving as the mechanism for coordinating evictions. We don't want any secondary cache (with values stored in Redis). Is that the intent?

Yep @jasenf ! You can decide to have only the memory cache + the backplane, and no distributed cache.

(Even though, I'd like to add, this situation suffers from a specific problem I have to try to overcome, not related to FusionCache or Redis but to that approach itself. If you want I can explain it in more detail so you can tell me what you think about it.)

@jodydonetti
Collaborator

jodydonetti commented Nov 25, 2021

Maybe it could also be set in a more granular way inside FusionCacheEntryOptions, so you could specify for which cases would you like the exceptions to bubble up.

Yes @sanllanta 😉

@jodydonetti
Collaborator

jodydonetti commented Dec 3, 2021

Hi @jasenf, I created #36 to brainstorm around the idea of memory + backplane without a distributed cache.

I'd be happy if you let me know what you think.

@jodydonetti
Collaborator

jodydonetti commented Jan 27, 2022

Hi all, I'm happy to say that I've finally been able to complete the design and implementation of the backplane feature.

And yes, you can disable automatic notifications per-operation for maximum control, on top of being able to manually send them whenever you want: this means you can have memory-only caches + backplane, without a distributed cache 😉🎉

Please take a look here, try it out and let me know what you think so I can move forward with the final version.

Thanks everybody 🙏

📦 It's a pre-release!

Note that the NuGet package is marked as pre-release, so please be sure to enable the related filter, otherwise you would not see it:

[screenshot of the pre-release filter in the NuGet package manager]

@jodydonetti
Collaborator

Meanwhile I published a (hopefully) even better alpha2 release.

@jodydonetti
Collaborator

Hi all, just wanted to update you on the next version: I just released BETA 2 of the next big release, which includes, among other small things, a big fix for the DI setup part.

This will probably be the last release before the official one.

@jodydonetti
Collaborator

Hi all, yesterday I released the BETA 3.

Unless some big problem comes up, this will be the very last release before the official one, which will be in the next few days 🎉

jodydonetti added the enhancement label Jun 29, 2022
@jodydonetti
Collaborator

Hi @sanllanta, long time no talk 😄
I'm going over some issues in the backlog and I was wondering how things ended up for you: do you still need the change in behavior you asked for here?
To recap: the request was to have the ability to catch distributed cache exceptions in your own code.

Please let me know, thanks!

@sanllanta
Author

Hi @jodydonetti! Thanks for reaching out. I'm currently doing OK without it, but I believe it could be a great addition so we can have a bit more control over how to handle cache exceptions.

@jodydonetti
Collaborator

Hi @sanllanta, I just released v0.11.0, which includes the new feature 🎉
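Usage should look roughly like this (a sketch with placeholder names; please double check the release notes for the exact option name and how it interacts with the other options):

```csharp
// Opt in to having distributed cache exceptions bubble up to the caller.
var options = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    ReThrowDistributedCacheExceptions = true
};

try
{
    await cache.SetAsync("product:42", product, options);
}
catch (Exception)
{
    // the distributed cache write failed and the exception now reaches your code
}
```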

Hope this helps.
