
async-openai 0.18 support #56

Open
ChristopherJMiller opened this issue Jan 5, 2024 · 1 comment

Comments

@ChristopherJMiller

OpenAI's spec introduced some new capabilities that led to larger changes in the async-openai crate. I'm more than happy to contribute this work to tiktoken-rs, but I wanted to open a dialogue first about one of the larger structural changes:

The biggest change (imo) is that ChatCompletionRequestMessage was restructured to use a different data type per role.

This makes getting message content less trivial, but it seemed like a necessity since OpenAI now supports user messages that include images when invoking certain models, so it will require a data structure change here as well. I'm not very familiar with this space beyond general application, so I'm looking for input on how these new user messages should be handled for token counting.
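A rough sketch of what extracting countable text might look like. These enums are simplified stand-ins for async-openai 0.18's per-role types, not the crate's exact API; the variant and field names are illustrative. Image parts are skipped here, since their token cost is model- and resolution-dependent and would need separate handling:

```rust
// Hypothetical, simplified mirror of the per-role message structure in
// async-openai 0.18 -- names are illustrative, not the crate's real API.
enum ChatCompletionRequestMessage {
    System { content: String },
    User { content: UserContent },
    Assistant { content: Option<String> },
}

enum UserContent {
    Text(String),
    // Multi-part user content can mix text and image parts.
    Parts(Vec<ContentPart>),
}

enum ContentPart {
    Text(String),
    ImageUrl(String),
}

// Collect only the textual content that contributes to the BPE token
// count; image parts are ignored in this sketch.
fn countable_text(msg: &ChatCompletionRequestMessage) -> String {
    match msg {
        ChatCompletionRequestMessage::System { content } => content.clone(),
        ChatCompletionRequestMessage::Assistant { content } => {
            content.clone().unwrap_or_default()
        }
        ChatCompletionRequestMessage::User { content } => match content {
            UserContent::Text(t) => t.clone(),
            UserContent::Parts(parts) => parts
                .iter()
                .filter_map(|p| match p {
                    ContentPart::Text(t) => Some(t.as_str()),
                    ContentPart::ImageUrl(_) => None,
                })
                .collect::<Vec<_>>()
                .join("\n"),
        },
    }
}
```

The open question is whether skipping image parts (or charging them a fixed per-image cost) is the right behavior for tiktoken-rs, which is what I'd like input on.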

@Dreaming-Codes
Contributor

This discussion was already started in #50
