Calls to the OpenAI completions endpoint crash when the stop tokens are specified as a list, because the cache cannot hash a list. The chat endpoint had the same issue when the chat messages were a non-hashable type. The fix there was to stringify the kwargs so that the cache could hash them, then parse the string back into JSON before making the request.
This PR implements the same logic for the completions endpoints.
There are two potential drawbacks I see here:
(1) This will invalidate the cache of anyone who still uses the completions endpoints in DSPy
(2) The completions endpoints are no longer supported by OpenAI and are being phased out, so maybe we should instead focus on removing support from DSPy.
A potential alternative solution would be to stringify the kwargs only when they contain an unhashable type. This would be backwards compatible with old caches, with the drawback that the hashability check would add a small delay to every call.
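The alternative could look roughly like the following (a hypothetical `normalize_kwargs` helper, not code from this PR):

```python
import json

def _is_hashable(value) -> bool:
    try:
        hash(value)
        return True
    except TypeError:
        return False

def normalize_kwargs(kwargs: dict):
    """Build a hashable cache key, stringifying only when necessary."""
    if all(_is_hashable(v) for v in kwargs.values()):
        # All values are hashable: keep the old-style key so existing
        # cache entries remain valid.
        return tuple(sorted(kwargs.items()))
    # Fall back to a JSON string only for unhashable values (e.g. lists).
    return json.dumps(kwargs, sort_keys=True)
```

The `try`/`except hash(...)` probe is the per-call cost mentioned above; it is cheap for small kwarg dicts but runs on every request.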