
Add support for torch.cuda.amp #162

Closed
quickgrid opened this issue Nov 5, 2020 · 2 comments

Comments

@quickgrid

The Apex installation command `pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./` failed for me on Windows, so I installed it from https://anaconda.org/conda-forge/nvidia-apex instead. With the flag `--fp16 True`, training was far slower than with fp16 disabled. I tested this on PyTorch 1.6 and stylegan2-pytorch 1.2.6.

I came across this issue, which mentions that torch.cuda.amp fixes many issues with apex.amp and has several other advantages, including Windows support. Please add support if possible.
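For reference, a minimal sketch of what a torch.cuda.amp training step looks like. The model, optimizer, and loss here are placeholders, not the stylegan2-pytorch training loop; the `enabled=` flags let the same code fall back to full precision on CPU or older GPUs:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

# Hypothetical toy model and optimizer, just to show the amp wiring.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# GradScaler rescales the loss so fp16 gradients don't underflow.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for _ in range(3):
    x = torch.randn(32, 128, device=device)
    target = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()

    # autocast runs the forward pass in mixed precision where it is safe.
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = nn.functional.cross_entropy(model(x), target)

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

With `enabled=False` both `autocast` and `GradScaler` are no-ops, so the same loop runs unchanged in fp32.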

@lucidrains
Owner

@quickgrid Last I checked, amp still has some issues in the DDP setting, so I'm holding off until those are resolved.

Also, when I tried amp, I didn't get any speed or memory improvements over apex.

@quickgrid
Author

Closing this issue, as amp should mostly speed up newer Nvidia GPUs with compute capability 7.0+. I have also read posts and articles saying PyTorch amp will not provide much benefit from mixed precision on Nvidia 10-series GPUs, since they lack Tensor Cores.

This link is for TensorFlow mixed precision, but the same should apply to PyTorch: https://www.tensorflow.org/guide/mixed_precision.
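A quick sketch of checking this at runtime, assuming only the standard PyTorch CUDA API: Tensor Cores require compute capability 7.0+ (Volta and newer), and without them fp16 autocast gives little or no speedup.

```python
import torch

if torch.cuda.is_available():
    # Returns (major, minor), e.g. (7, 5) for a Turing card, (6, 1) for a 10-series.
    capability = torch.cuda.get_device_capability()
    has_tensor_cores = capability >= (7, 0)
else:
    capability = None
    has_tensor_cores = False

print(f"compute capability: {capability}, likely amp speedup: {has_tensor_cores}")
```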
