
Memory increases significantly during inference #196

Open

xpq-tech opened this issue Jun 3, 2024 · 0 comments

xpq-tech commented Jun 3, 2024

We used AWQ to quantize a model with the same architecture as LLaMA 2. After loading the quantized model, VRAM usage was only 6567 MB, but it reached 32223 MB once inference had generated 500 tokens. Is this behavior inherent to AWQ, or is there an error in our implementation? Looking forward to your response.

  • After load: [screenshot of VRAM usage]
  • After inference (tokens = 500): [screenshot of VRAM usage]
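For reference, a minimal measurement sketch (not from the original report) that can help isolate whether the growth comes from the KV cache or from the inference code itself: it records allocated VRAM right after loading and the peak during a 500-token generation. The checkpoint path `./llama2-awq` and the use of the Hugging Face Transformers loading API are assumptions; the original post does not say how the model was loaded or how generation was run.

```python
# Minimal VRAM-growth measurement sketch.
# Assumptions: a Transformers-loadable AWQ checkpoint at the hypothetical
# path "./llama2-awq", and a single CUDA device.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama2-awq"  # hypothetical path, not from the original report

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda")

torch.cuda.reset_peak_memory_stats()
after_load = torch.cuda.memory_allocated() / 2**20
print(f"After load: {after_load:.0f} MiB")

inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=500, use_cache=True)

peak = torch.cuda.max_memory_allocated() / 2**20
print(f"Peak during 500-token generation: {peak:.0f} MiB")

# Rough expected KV-cache growth per sequence:
#   2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes
# For a 7B LLaMA-2-style model at fp16 and ~500 tokens this is well under
# 1 GiB, so if the observed jump is tens of GiB, suspect the inference
# code (e.g. activations kept alive, gradients enabled, or per-step
# tensors accumulated in a Python list) rather than AWQ itself.
```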