Google Gemini: LLM requests are not being cached #315

Open
jwmatthews opened this issue Aug 22, 2024 · 0 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@jwmatthews (Member)

I am testing #307 and confirmed that while successful responses are returned to run_demo.py, no data is being written to disk for cached responses.

I ran with DEMO_MODE set to True, and also with the default of False.

Note: I also saw an odd gRPC warning in the logs (#313); I'm unsure whether that is related.
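For illustration, here is a minimal sketch of the kind of file-backed caching behavior being described; the function names, cache path, and JSON layout are hypothetical and not this repo's actual implementation. The point is that after each successful request a response file should appear under the cache directory, which is the step that does not seem to happen for Gemini:

```python
# Hypothetical sketch of a file-backed LLM response cache (not the
# project's real code). Each successful response should be persisted
# so an identical later request can be replayed from disk.
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("llm_cache")  # assumed location; the real path may differ


def cache_key(model: str, prompt: str) -> str:
    """Derive a stable filename from the request parameters."""
    return hashlib.sha256(f"{model}\n{prompt}".encode("utf-8")).hexdigest()


def cached_completion(model: str, prompt: str, call_llm) -> str:
    """Return a cached response if one exists; otherwise call the model
    via call_llm(model, prompt) and persist the result to disk."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / f"{cache_key(model, prompt)}.json"
    if path.exists():
        return json.loads(path.read_text())["response"]
    response = call_llm(model, prompt)
    # This write is what appears to be missing for Gemini: requests
    # succeed, but no cache files show up on disk afterwards.
    path.write_text(
        json.dumps({"model": model, "prompt": prompt, "response": response})
    )
    return response
```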

@jwmatthews added the bug and help wanted labels on Aug 22, 2024
Projects: None yet
Development: No branches or pull requests
Participants: 1