
Modify openai model to receive openai client as argument during initialization #593

Merged (2 commits) on Mar 7, 2024

Conversation

RobinPicard (Contributor)

PR to solve issue #563

I have an issue with testing, though. The fact that there's an object named openai in the openai module creates an import conflict when trying to use with patch('outlines.models.openai.generate_chat') as mocked_generate_chat. I see two solutions (a minimal illustration of the clash follows the list below):

  • Rename/remove the openai object (see comment in the linked issue)
  • Use something other than unittest.mock.patch to mock functions in that module. Maybe there's another way with the unittest package that I'm unaware of. Otherwise, I've used the package expectise in the past and find it quite convenient, but that implies adding another dependency to outlines
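
For illustration, a minimal sketch of the clash, assuming patch resolves the dotted target through attribute lookups on the package; everything beyond the error itself is an assumption, not code from this PR:

```python
from unittest.mock import patch

import outlines.models

# Suppose outlines/models/__init__.py does `from .openai import openai`.
# The attribute `outlines.models.openai` is then the callable, not the
# submodule, so patch's target resolution ends up doing getattr on a
# function instead of a module:
with patch("outlines.models.openai.generate_chat") as mocked_generate_chat:
    pass
# AttributeError: <function openai ...> does not have the attribute 'generate_chat'
```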

@rlouf (Member) commented Jan 28, 2024

Can't we remove AzureOpenAI and OpenAICompatibleAPI if we pass the client?

@RobinPicard (Contributor, Author)

> Can't we remove AzureOpenAI and OpenAICompatibleAPI if we pass the client?

There are small differences:

  • AzureOpenAI currently requires both a deployment_name and a model_name. The deployment_name is the equivalent of the OpenAI client's model_name (it's what ends up in the OpenAIConfig), while the model_name is used for the tokenizer property
  • OpenAICompatibleAPI currently requires an encoding argument used for the tokenizer property, and the check on the model_name is removed for it

I think we could fit all of them into a single class with some optional parameters and conditions on them in the initialization. I'm not sure it would be simpler to use, though.
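
For concreteness, a rough sketch of what such a merged class might look like; every name and default below is an assumption, not the PR's actual code:

```python
from typing import Optional

import tiktoken


class OpenAI:
    """Hypothetical merged model covering OpenAI, Azure, and compatible APIs."""

    def __init__(
        self,
        model_name: Optional[str] = None,
        deployment_name: Optional[str] = None,  # Azure-specific
        encoding: Optional[str] = None,  # for OpenAI-compatible APIs
    ):
        # For Azure, deployment_name plays the role that model_name plays
        # for the OpenAI client: it is what ends up in the request config.
        self.config_model = deployment_name or model_name
        # The tokenizer comes from model_name when available, otherwise
        # from an explicit encoding.
        self.encoding_name = encoding or model_name

    @property
    def tokenizer(self):
        if self.encoding_name is None:
            raise ValueError("Provide either a model_name or an encoding")
        try:
            return tiktoken.encoding_for_model(self.encoding_name)
        except KeyError:
            return tiktoken.get_encoding(self.encoding_name)
```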

@RobinPicard (Contributor, Author)

I've merged the three into a single model/class; let me know if you think it's better like that.

@lapp0 (Collaborator) commented Jan 29, 2024

Please add openai to the test list under [project.optional-dependencies] in pyproject.toml
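
For reference, a hypothetical excerpt of that section of pyproject.toml; the surrounding entries are assumptions:

```toml
[project.optional-dependencies]
test = [
    "pytest",
    "openai",  # added so the test suite can exercise the OpenAI model
]
```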

@lapp0 (Collaborator) commented Feb 5, 2024

Please rebase; main has a fix for your test failure.

@RobinPicard force-pushed the modify_openai_model branch 2 times, most recently from 2d8f284 to c261f47 (February 5, 2024 22:23)
@RobinPicard (Contributor, Author)

> Please rebase; main has a fix for your test failure.

Done, but I still have the naming conflict between the module and the openai object when mocking the generate_chat function in the tests.

@lapp0 (Collaborator) commented Feb 6, 2024

Yeah, the import structure is tricky; we might consider doing import openai as _openai_module in outlines/models/__init__.py to accommodate patching.
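
A minimal sketch of that idea; the actual layout of __init__.py is not shown in this thread, so everything here is an assumption:

```python
# outlines/models/__init__.py (hypothetical sketch)

# Bind the third-party package under a private alias so the public name
# `openai` can keep pointing at outlines' own submodule, which keeps
# patch targets like "outlines.models.openai.generate_chat" resolvable.
import openai as _openai_module

from .openai import OpenAI
```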

@rlouf (Member) commented Feb 6, 2024

Looks good! I can take a look at the import issue. We will also need to update the documentation; this is a big interface change.

Three review threads on outlines/models/openai.py (outdated, resolved)
@RobinPicard (Contributor, Author)

I've made a few modifications based on your comments to simplify the model and reduce the number of elements specific to OpenAI vs. Azure vs. OpenAICompatible. Sorry for the back-and-forth.

@rlouf (Member) commented Mar 4, 2024

I thought a little more about this, and given the changes I made in #727, I suggest the following interface:

  1. The function openai initializes an AsyncOpenAI client. It takes as arguments everything that pertains to the client: model_name, api_key, max_retries, timeout.
  2. We can have another azure_openai alias that also returns an AsyncOpenAI client.
  3. The OpenAI class is initialized with an AsyncOpenAI (or compatible) client and a tokenizer.

There are a few details that still need to be fleshed out, but I think this solves many issues, like the impossibility of using choice with third-party APIs.
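
A hypothetical usage sketch of that interface; the exact signatures are assumptions extrapolated from the three points above:

```python
import openai
import tiktoken

from outlines import models

# 1. `openai` builds an AsyncOpenAI client from client-level arguments.
model = models.openai("gpt-4", api_key="...", max_retries=3, timeout=60)

# 2. An `azure_openai` alias would likewise return an AsyncOpenAI client.

# 3. Or initialize the OpenAI class directly with a compatible async
#    client and a tokenizer, e.g. for a third-party API.
client = openai.AsyncOpenAI(base_url="https://example.com/v1", api_key="...")
model = models.OpenAI(client, tokenizer=tiktoken.get_encoding("cl100k_base"))
```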

@rlouf force-pushed the modify_openai_model branch 5 times, most recently from f488ba1 to 1d12afe (March 5, 2024 19:55)
@rlouf (Member) commented Mar 5, 2024

@RobinPicard I implemented what I suggested above and fixed the issue with mocking. Could you please review?

@RobinPicard (Contributor, Author) left a review comment:

The interface makes sense to me!

Two review threads on outlines/models/openai.py (outdated, resolved)
@RobinPicard requested a review from rlouf on March 7, 2024
@rlouf merged commit 9fd0f46 into outlines-dev:main on Mar 7, 2024
5 checks passed