
Bug: Does any convert.py support llama3? #7737

Closed
zyc1128 opened this issue Jun 4, 2024 · 7 comments
Labels
bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches)

Comments

@zyc1128

zyc1128 commented Jun 4, 2024

What happened?

Does any convert.py support llama3?

Name and Version

Does any convert.py support llama3?

What operating system are you seeing the problem on?

No response

Relevant log output

No response

@zyc1128 zyc1128 added the bug-unconfirmed and low severity labels Jun 4, 2024
@christianazinn
Contributor

You'll want to use convert-hf-to-gguf.py. The Llama3 BPE pretokenizer is supported by default in convert-hf-to-gguf-update.py.
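For context, a minimal sketch of that conversion, assuming a Hugging Face-format model directory with placeholder paths (flag names may vary by llama.cpp version):

# Hypothetical example: convert an HF-format Llama 3 directory to GGUF
python convert-hf-to-gguf.py models/Meta-Llama-3-8B-hf --outfile Meta-Llama-3-8B.gguf --outtype f16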

@ego

ego commented Jun 4, 2024

How do I convert the original Meta-Llama-3 weights to gguf?

❯ ls models/Meta-Llama-3-8B
checklist.chk       consolidated.00.pth params.json         tokenizer.model

@Galunid
Collaborator

Galunid commented Jun 4, 2024

Convert the .pth weights yourself using https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py, or download the weights from huggingface. Then you can use convert-hf-to-gguf.py.
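A sketch of the full two-step pipeline described above, with placeholder paths; the --model_size and --llama_version flags belong to the transformers script and should be checked against your installed version:

# Step 1 (transformers repo): convert Meta's .pth checkpoint to Hugging Face format
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir models/Meta-Llama-3-8B --model_size 8B --llama_version 3 --output_dir models/Meta-Llama-3-8B-hf

# Step 2 (llama.cpp repo): convert the Hugging Face directory to GGUF
python convert-hf-to-gguf.py models/Meta-Llama-3-8B-hf --outfile Meta-Llama-3-8B.gguf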

@Galunid Galunid closed this as completed Jun 4, 2024
@ego

ego commented Jun 4, 2024

> Convert the .pth weights yourself using https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py, or download the weights from huggingface. Then you can use convert-hf-to-gguf.py.

I have tried it:

python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir ../llama.cpp/models/Meta-Llama-3-8B --output_dir converted

but it produces an error:

.venv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from ../llama.cpp/models/Meta-Llama-3-8B/tokenizer.model

@Galunid
Collaborator

Galunid commented Jun 4, 2024

See huggingface/transformers#30334; this is not our script, so sadly we can't provide support for it.

@ego

ego commented Jun 4, 2024

Thanks a lot!

@ArthurZucker

If you update to the latest version of transformers, the script supports the conversion.
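In practice that would look something like the following (assuming any transformers release with Llama 3 support in this script suffices):

pip install --upgrade transformers
# then rerun the conversion command above against the same --input_dir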
