
Add integration for AutoGPTQ and AutoAWQ #343

Merged
merged 3 commits into main on Nov 24, 2023

Conversation

@rlouf (Member) commented Nov 9, 2023

Closes #245. Closes #270.

@rlouf force-pushed the autogptq-integration branch 2 times, most recently from a3be6bb to 5e91f77 on November 24, 2023 at 14:42
@rlouf changed the title from "Add integration for ctransformers, AutoGPTQ and AutoAWQ" to "Add integration for AutoGPTQ and AutoAWQ" on Nov 24, 2023
@rlouf (Member, Author) commented Nov 24, 2023

Not integrating ctransformers because of issues initializing the model: marella/ctransformers#154

@rlouf marked this pull request as ready for review on November 24, 2023 at 15:37
@rlouf force-pushed the autogptq-integration branch 2 times, most recently from 1b4898d to 049c29d on November 24, 2023 at 15:41
@rlouf merged commit a117d9d into main on Nov 24, 2023
4 checks passed
@rlouf deleted the autogptq-integration branch on November 24, 2023 at 15:42
Linked issue that merging this pull request may close:

Feature request: GGML/GGUF support using quantized models from llama