
PyTorch Hub amp.autocast() inference #2641

Merged 1 commit on Mar 28, 2021
Commits on Mar 28, 2021

  1. PyTorch Hub amp.autocast() inference

    I think this should help speed up CUDA inference, as models may currently be running in FP32 inference mode on CUDA devices unnecessarily.
    glenn-jocher committed Mar 28, 2021
    Commit ce34cba
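A minimal sketch of the idea behind this change: wrap inference in PyTorch's `torch.cuda.amp.autocast` context so that eligible ops run in FP16 on CUDA devices instead of FP32. The model and input below are illustrative stand-ins, not the actual code from this PR.

```python
import torch

# Stand-in for a model loaded via torch.hub (e.g. a YOLOv5 model);
# a tiny Linear layer keeps the sketch self-contained and CPU-runnable.
model = torch.nn.Linear(8, 4).eval()
x = torch.randn(2, 8)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device), x.to(device)

# autocast is enabled only on CUDA; on CPU it is a no-op here,
# so the same code path works on both device types.
with torch.no_grad(), torch.cuda.amp.autocast(enabled=(device == "cuda")):
    y = model(x)
```

On a CUDA device, `y` would typically come back as `torch.float16` for autocast-eligible ops; on CPU it stays `torch.float32`.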