
[server] Update OpenAI Model Support #1300

Merged: 5 commits into openai_support on Oct 9, 2023

Conversation

@dsikka (Contributor) commented on Oct 6, 2023

Summary

  • This PR updates the OpenAI integration to be more independent of the DeepSparse server, leveraging the server refactor
  • It allows multiple text generation models to be hosted behind the single /v1/chat/completions endpoint, adds the /v1/models endpoint, updates the base routes, and adds a separate OpenAI-specific workflow command

Examples and testing

  • On the command line, the following command can be used:
deepsparse.openai sample_config.yaml 

Here, sample_config.yaml has the following structure:

num_cores: 2
num_workers: 2
endpoints:
  - task: text_generation
    model: zoo:nlg/text_generation/opt-1.3b/pytorch/huggingface/opt_pretrain/pruned50_quantW8A8-none
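Since the server can host multiple text generation models behind the single /v1/chat/completions endpoint, the config should presumably accept several entries under endpoints; a sketch (the second model stub is hypothetical):

```yaml
num_cores: 2
num_workers: 2
endpoints:
  - task: text_generation
    model: zoo:nlg/text_generation/opt-1.3b/pytorch/huggingface/opt_pretrain/pruned50_quantW8A8-none
  - task: text_generation
    model: zoo:another/text_generation/model/stub  # hypothetical second model
```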
  • This launches a FastAPI app with the following endpoints:
    (screenshot of the server's endpoint list omitted)

Use the openai API to send requests:

import openai

# Point the client at the local DeepSparse server
openai.api_key = "EMPTY"
openai.api_base = "http://localhost:5543/v1"

# Chat Completion API
stream = False
completion = openai.ChatCompletion.create(
    messages="how are you?",
    stream=stream,
    max_tokens=30,
    model="zoo:nlg/text_generation/opt-1.3b/pytorch/huggingface/opt_pretrain/pruned50_quantW8A8-none",
)

print("Chat results:")
if stream:
    # Streaming responses are returned chunk by chunk
    for chunk in completion:
        print(chunk)
else:
    print(completion)
  • The user can provide the model they want to run inference on as part of the payload. If that model was not provided when launching the server, an error is returned. Functionality to add additional models after the server has launched will be added in a follow-up.
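Because a request naming a model that was not launched returns an error, a client can check the /v1/models listing first. A minimal sketch of that check (the response shape follows the OpenAI models-list format; the sample payload is illustrative, reusing the model stub from above):

```python
def is_model_served(models_response: dict, model: str) -> bool:
    """Check whether `model` appears in a /v1/models response payload."""
    return any(entry.get("id") == model for entry in models_response.get("data", []))


# Illustrative /v1/models payload (ids are examples, not a live server response)
sample = {
    "object": "list",
    "data": [
        {
            "id": "zoo:nlg/text_generation/opt-1.3b/pytorch/huggingface/opt_pretrain/pruned50_quantW8A8-none",
            "object": "model",
        },
    ],
}

print(is_model_served(sample, "zoo:nlg/text_generation/opt-1.3b/pytorch/huggingface/opt_pretrain/pruned50_quantW8A8-none"))  # True
print(is_model_served(sample, "some-other-model"))  # False
```

A client could run this check against the live /v1/models response before constructing the chat-completion request, avoiding a round trip that is guaranteed to fail.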

@dsikka dsikka marked this pull request as ready for review October 6, 2023 16:47
@mgoin (Member) left a comment

Really nice! Just two nits

src/deepsparse/server/openai_server.py (2 resolved review comments)
@dsikka dsikka merged commit 45f199e into openai_support Oct 9, 2023
@dsikka dsikka deleted the update_openai branch October 9, 2023 15:25
dsikka added a commit that referenced this pull request Oct 10, 2023
* refactor server for different integrations; additional functionality for chat completion streaming and non streaming

* further refactor server

* add support such that openai can host multiple models

* update all tests

* fix output for n > 1

* add inline comment explaining ProxyPipeline

* [server] Update OpenAI Model Support (#1300)

* update server

* allow users to send requests with new models

* use v1; move around baseroutes

* add openai path

* PR comments

* clean-up output classes to be dataclasses, add docstrings, cleanup generation kwargs
dsikka added a commit that referenced this pull request Oct 11, 2023
* update/clean-up server to match mlserver docs

* update server tests

* add back ping

* [server] Refactor + OpenAI Chat Completion Support (#1288)

* refactor server for different integrations; additional functionality for chat completion streaming and non streaming

* further refactor server

* add support such that openai can host multiple models

* update all tests

* fix output for n > 1

* add inline comment explaining ProxyPipeline

* [server] Update OpenAI Model Support (#1300)

* update server

* allow users to send requests with new models

* use v1; move around baseroutes

* add openai path

* PR comments

* clean-up output classes to be dataclasses, add docstrings, cleanup generation kwargs

* update readme, update route cleaning, update docstring

* fix README for QA
dsikka added a commit that referenced this pull request Oct 11, 2023
* update/clean-up server to match mlserver docs

* update server tests

* add back ping

* [server] Refactor + OpenAI Chat Completion Support (#1288)

* refactor server for different integrations; additional functionality for chat completion streaming and non streaming

* further refactor server

* add support such that openai can host multiple models

* update all tests

* fix output for n > 1

* add inline comment explaining ProxyPipeline

* [server] Update OpenAI Model Support (#1300)

* update server

* allow users to send requests with new models

* use v1; move around baseroutes

* add openai path

* PR comments

* clean-up output classes to be dataclasses, add docstrings, cleanup generation kwargs

* update readme, update route cleaning, update docstring

* fix README for QA

* add openai doc

* update docs

* Update src/deepsparse/server/openai.md

Co-authored-by: Domenic Barbuzzi <domenic@neuralmagic.com>

---------

Co-authored-by: Domenic Barbuzzi <domenic@neuralmagic.com>

3 participants