vLLM Compatibility #5

Open · sidjha1 opened this issue Feb 3, 2024 · 1 comment
sidjha1 commented Feb 3, 2024

Hello, I was curious whether it is possible to run models locally via vLLM. The README mentions HF TGI for running local models, and looking through the experimental dspy branch, it seems that HF TGI is chosen for the model whenever an OpenAI model is not provided. Should I modify the experimental branch to add vLLM support, or is there another way to run local models on vLLM?
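For context, the selection described above looks roughly like the following. This is a paraphrased sketch rather than the actual code in the irera branch, assuming DSPy's `dspy.OpenAI` and `dspy.HFClientTGI` clients; the model-name check, port, and URL are placeholders:

```python
import dspy

def load_lm(model_name: str, port: int = 8080, url: str = "http://localhost"):
    # Rough sketch of the behaviour described above: OpenAI model names are
    # routed to dspy.OpenAI, anything else falls back to a local HF TGI server.
    if model_name.startswith("gpt-"):  # placeholder check for "an OpenAI model"
        return dspy.OpenAI(model=model_name)
    return dspy.HFClientTGI(model=model_name, port=port, url=url)
```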

KarelDO (Owner) commented Feb 7, 2024

Ideally DSPy handles all of this, and IReRa just uses whatever LLM you supply. To run with vLLM for now, it is indeed best to change how the models are created in the irera branch on DSPy. I'll need to think about a more scalable way of handling model providers in the long term.
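A minimal sketch of that change, assuming DSPy's `dspy.HFClientVLLM` client and a vLLM server already running locally; the model name, port, and URL below are placeholders:

```python
import dspy

# Assumes a vLLM server is already running locally (see the vLLM docs for
# how to launch one); the model name, port, and URL here are placeholders.
lm = dspy.HFClientVLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    port=8000,
    url="http://localhost",
)
dspy.settings.configure(lm=lm)
```

With the LM configured this way, the IReRa programs should pick up the vLLM-backed model the same way they currently pick up the TGI one.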
