
[FEATURE-REQUEST] Switch to Pytorch 2.0, and use compiled models. #89

Open
AJDERS opened this issue Dec 5, 2022 · 1 comment
Labels
enhancement New feature or request

Comments

@AJDERS
Collaborator

AJDERS commented Dec 5, 2022

PyTorch 2.0 has been announced, promising backwards compatibility and many performance enhancements.

As an example of the performance enhancements, an option to compile models is now available. It is reported to give a ~50% speed-up on Hugging Face models on an A100 (and slightly less on non-server-class GPUs) with minimal code changes. The speed-ups are even larger with AMP precision than with float32.

Seems like simply doing:

compiled_model = torch.compile(model)

will lead to this speed-up.
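
For context, a minimal sketch of how this could look end to end (the model name, tokenizer, and inputs below are illustrative assumptions; only torch.compile itself is the documented PyTorch 2.0 API):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative Hugging Face model; any torch.nn.Module can be compiled the same way.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# torch.compile returns an optimized wrapper around the same module:
# the first forward pass triggers compilation, later passes reuse the compiled graph.
compiled_model = torch.compile(model)

inputs = tokenizer(["An example sentence."], return_tensors="pt")
with torch.no_grad():
    outputs = compiled_model(**inputs)

Since the compiled module keeps the original module's interface, this should slot into existing training and inference code without other changes.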

AJDERS added the enhancement (New feature or request) label on Dec 5, 2022
@AJDERS
Collaborator Author

AJDERS commented Feb 24, 2023

This would constitute a major release and is considered a nice-to-have.
