Can't load a local model with llama.cpp "repo id must be a string" "model path does not exist" #834

Closed
julian-passebecq opened this issue Apr 24, 2024 · 2 comments
Labels
bug · llama.cpp (Related to the `llama.cpp` integration)

Comments

@julian-passebecq commented Apr 24, 2024

Describe the issue as clearly as possible:

I can't load a local model with llama.cpp by following the Outlines documentation. The model itself loads fine, so `llm` does hold my local model, but the script errors as soon as `model` is used.

I tried different syntaxes, but nothing works. I also tried putting the model in the same directory and using `llm = Llama("./mistral-7b-instruct-v0.2.Q5_K_M.gguf")` followed by `model = models.llamacpp(llm)`; the model is in the same directory, yet I get this error:
Message=Model path does not exist: ./mistral-7b-instruct-v0.2.Q5_K_M.gguf
Source=D:\LLM\outlines\outlines2.py
StackTrace:
File "D:\LLM\outlines\outlines2.py", line 5, in (Current frame)
llm = Llama("./mistral-7b-instruct-v0.2.Q5_K_M.gguf")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Model path does not exist: ./mistral-7b-instruct-v0.2.Q5_K_M.gguf
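
Note that a relative path like ./mistral-7b-instruct-v0.2.Q5_K_M.gguf is resolved against the current working directory, not the script's directory, which commonly causes this error when running from an IDE. A minimal sketch to check this, assuming the GGUF file sits next to the script:

import os

# Resolve the GGUF file relative to this script rather than the working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
model_path = os.path.join(script_dir, "mistral-7b-instruct-v0.2.Q5_K_M.gguf")

# Fail early with the fully resolved path if the file is missing
if not os.path.exists(model_path):
    raise FileNotFoundError(f"Model file not found at: {model_path}")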

Steps/code to reproduce the bug:

from outlines import models
from llama_cpp import Llama
import os
import json


MODEL_DIR = r"D:\LLM\models\mistral"
model_file = "mistral-7b-instruct-v0.2.Q5_K_M.gguf"
model_path = os.path.join(MODEL_DIR, model_file)

llm = Llama(model_path, n_gpu_layers=33, n_ctx=3584, n_batch=521, verbose=True)
model = models.llamacpp(llm)

Expected result:

The local model loads and `models.llamacpp(llm)` returns a usable Outlines model.

Error message:

Repo id must be a string, not <class 'llama_cpp.llama.Llama'>: '<llama_cpp.llama.Llama object at 0x000001A779383470>'.
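
For context, in this Outlines release `models.llamacpp(...)` expects a Hugging Face repo id string and downloads the weights itself, which is why passing a `Llama` instance raises the error above. A minimal sketch of that calling form (the repo id and filename here are illustrative):

from outlines import models

# models.llamacpp downloads the GGUF from the Hugging Face Hub;
# the repo id and filename below are illustrative
model = models.llamacpp(
    "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    "mistral-7b-instruct-v0.2.Q5_K_M.gguf",
)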

Outlines/Python version information:

latest version

Context for the issue:

(screenshots attached)

@rlouf added the llama.cpp label Apr 24, 2024
@rlouf (Member) commented Apr 24, 2024

Thank you for opening an issue. I am sorry, but the documentation was incorrect. The following code should work:

from outlines import models
from llama_cpp import Llama
import os
import json


MODEL_DIR = r"D:\LLM\models\mistral"
model_file = "mistral-7b-instruct-v0.2.Q5_K_M.gguf"
model_path = os.path.join(MODEL_DIR, model_file)

llm = Llama(model_path, n_gpu_layers=33, n_ctx=3584, n_batch=521, verbose=True)
model = models.LlamaCpp(llm)

I just updated the documentation in #835
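
Once wrapped, the model plugs into the usual generators. A minimal usage sketch, assuming the outlines.generate API of this release (the prompt is illustrative):

from outlines import generate

# Build a plain-text generator from the wrapped model and run a prompt
generator = generate.text(model)
answer = generator("Question: What is the capital of France?\nAnswer:")
print(answer)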

@julian-passebecq (Author) commented

Thanks so much for the quick answer; I confirm that your code works well :) All the best!
