
Enable quantization of tied embeddings #1703

Merged · 1 commit into neuralmagic:main on Aug 14, 2023

Conversation

eldarkurtic
Contributor

Tied embeddings are usually implemented as subclasses of torch.nn.Embedding, with custom logic added to reuse the same weight matrix for both the embedding layer and the LM head. This PR enables detecting these modules during quantization: their type is not Embedding itself but something custom defined by the LLM implementation. However, they are all subclasses of torch.nn.Embedding, which we can use to detect them.

Before this PR, the quantization modifier could not find tied-embedding modules because their type is not exactly torch.nn.Embedding.
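As a minimal sketch of the difference (TiedEmbedding here is a hypothetical stand-in for a model-specific tied-embedding class, not a class from this repo):

```python
import torch

# Hypothetical tied-embedding module, as many LLM implementations define:
# a subclass of torch.nn.Embedding whose weight is shared with the LM head.
class TiedEmbedding(torch.nn.Embedding):
    pass

emb = TiedEmbedding(num_embeddings=32000, embedding_dim=4096)

# Exact-type matching misses the subclass ...
print(type(emb) is torch.nn.Embedding)      # False
# ... while an isinstance check, which respects subclassing, detects it.
print(isinstance(emb, torch.nn.Embedding))  # True
```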

@bfineran bfineran merged commit bb5021b into neuralmagic:main Aug 14, 2023
10 checks passed