
Torch only inference + any-device quantization #319

Merged · 5 commits merged into main from torch_unpack_inference on Jan 24, 2024

Conversation

casper-hansen (Owner) commented on Jan 23, 2024

This will allow any device to run AWQ models, although much more slowly. Realistically, it should only be used for testing or local development.
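For context, a torch-only path boils down to unpacking the int32-packed 4-bit weights and dequantizing them with plain tensor ops instead of a custom CUDA kernel. The sketch below illustrates that general technique only; the tensor shapes and packing order are simplified assumptions (the real AWQ layout additionally interleaves packed columns), not the exact code from this PR.

```python
import torch

def dequantize_packed(qweight, qzeros, scales, w_bit=4, group_size=128):
    """Unpack int32-packed quantized weights and dequantize in pure torch.

    Assumed (illustrative) layout:
      qweight: [in_features, out_features // pack_factor] int32
      qzeros:  [in_features // group_size, out_features // pack_factor] int32
      scales:  [in_features // group_size, out_features] float
    """
    mask = (1 << w_bit) - 1  # 0b1111 for 4-bit
    # Bit offsets of each packed value inside an int32: 0, 4, 8, ..., 28
    shifts = torch.arange(0, 32, w_bit, device=qweight.device)

    # Shift and mask to pull the packed values out, then flatten back to 2D.
    iweight = (qweight.unsqueeze(-1) >> shifts) & mask   # [in, out/pf, pf]
    iweight = iweight.reshape(qweight.shape[0], -1)      # [in, out]
    izeros = (qzeros.unsqueeze(-1) >> shifts) & mask
    izeros = izeros.reshape(qzeros.shape[0], -1)

    # Expand per-group scales/zeros so each row of the group shares them.
    scales = scales.repeat_interleave(group_size, dim=0)  # [in, out]
    izeros = izeros.repeat_interleave(group_size, dim=0)  # [in, out]

    # Classic asymmetric dequantization: w = (q - z) * s
    return (iweight - izeros) * scales
```

Since every op here is device-agnostic PyTorch, the same code path works on CUDA, MPS, or CPU, which is what makes any-device inference possible at the cost of speed.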

Quantization is tested to work on CUDA/MPS/CPU devices.
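For reference, a minimal quantization run with AutoAWQ looks roughly like the following. The model path is a hypothetical example and the quant_config values mirror the project's README-style defaults rather than anything specific to this PR.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"  # hypothetical example model
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# With this change, the quantization pass runs on whichever device is
# available (CUDA, MPS, or CPU) instead of requiring a CUDA GPU.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("mistral-awq")
```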

casper-hansen changed the title from "Torch only inference" to "Torch only inference + any-device quantization" on Jan 24, 2024
casper-hansen merged commit c6c7b06 into main on Jan 24, 2024
casper-hansen deleted the torch_unpack_inference branch on Jan 26, 2024