
feat: add check for quantized model #913

Merged · 3 commits · Dec 4, 2023

Conversation

@NanoCode012 (Collaborator)
There was a case on Discord where someone tried to train an AWQ model using a full finetune.

This adds a check that when `quantization_config` is present in the model's config, `gptq` is set.

Inversely, it makes sure that when `gptq: true`, the `quantization_config` exists and describes a GPTQ model.
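The two-way check described above can be sketched as follows. This is an illustrative approximation, not the actual axolotl implementation: the function name, the `dict`-based config, and the `quant_method` field access are assumptions for the sketch (though `quant_method` is the field HF Transformers stores in `quantization_config`, as in the config.json linked below).

```python
# Hypothetical sketch of the validation this PR describes. Names are
# illustrative; the real check lives in src/axolotl/utils/models.py.

def check_quantization_config(model_config: dict, cfg_gptq: bool) -> None:
    """Raise if the model's quantization_config and the `gptq` flag disagree."""
    quant_cfg = model_config.get("quantization_config")
    quant_method = (quant_cfg or {}).get("quant_method")

    # A model quantized with a non-GPTQ method (e.g. AWQ) cannot be
    # trained via full finetune, so reject it early.
    if quant_cfg is not None and quant_method != "gptq":
        raise ValueError(
            f"Model is quantized with {quant_method!r}; "
            "training quantized models requires gptq."
        )

    # Inversely: `gptq: true` requires the model to actually carry a
    # GPTQ quantization_config.
    if cfg_gptq and quant_method != "gptq":
        raise ValueError(
            "`gptq: true` but model has no GPTQ quantization_config."
        )
```

With this in place, an AWQ checkpoint fails fast with a clear error instead of producing a confusing failure mid-training.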

@winglian (Collaborator) commented Dec 4, 2023

Not sure why your new check is failing the e2e test; the model definitely has that set:

https://huggingface.co/TheBlokeAI/jackfram_llama-68m-GPTQ/blob/main/config.json#L29

@NanoCode012 (Collaborator, Author)

Cool, thanks!

@NanoCode012 NanoCode012 merged commit a581e9f into axolotl-ai-cloud:main Dec 4, 2023
4 checks passed
@NanoCode012 NanoCode012 deleted the feat/check_quantize branch December 4, 2023 16:20
mkeoliya pushed a commit to mkeoliya/axolotl that referenced this pull request Dec 15, 2023
* feat: add check for quantized model

* chore: refactor and add another check

* Update src/axolotl/utils/models.py

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>
@Blaizzy mentioned this pull request Apr 10, 2024