trainer.on_gpu == True for gpus = 0 or gpus=[] #558

Closed
mpariente opened this issue Nov 28, 2019 · 1 comment · Fixed by #561
Labels
bug Something isn't working

Comments


mpariente commented Nov 28, 2019

Describe the bug
trainer.on_gpu == True for gpus=0 or gpus=[] because of this line.
When resuming from a checkpoint, the model is transferred to GPU at this line, even if gpus=0.

Expected behavior
gpus=0 should make training happen on CPU.

I can make a PR for that; the fix is pretty simple. I think we can just replace
self.on_gpu = gpus is not None and torch.cuda.is_available()
with
self.on_gpu = True if (gpus and torch.cuda.is_available()) else False

Am I missing something?
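
For illustration, here is a minimal standalone sketch (not the actual Lightning code; cuda_available stands in for torch.cuda.is_available()) comparing how the current check and the proposed one treat gpus=0 and gpus=[]:

```python
def on_gpu_current(gpus, cuda_available=True):
    # current check: only None disables the GPU path, so 0 and [] still count
    return gpus is not None and cuda_available

def on_gpu_proposed(gpus, cuda_available=True):
    # proposed check: any falsy value (None, 0, []) disables the GPU path
    return True if (gpus and cuda_available) else False

for gpus in (None, 0, [], 1, [0]):
    print(gpus, on_gpu_current(gpus), on_gpu_proposed(gpus))
# gpus=0 and gpus=[] give True with the current check but False with the proposed one.
```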

mpariente added the bug label Nov 28, 2019

williamFalcon commented Nov 28, 2019

Thanks for bringing this up. To use no GPUs, just set

gpus=None

or don't pass any GPU argument to the Trainer.

If you think that gpus=0 or gpus=[] should also run on no GPUs, feel free to submit a PR.
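
A small sketch of the suggested usage, assuming the pytorch_lightning Trainer API from the time of this issue:

```python
from pytorch_lightning import Trainer

# Explicitly request CPU-only training:
trainer = Trainer(gpus=None)
# Or simply omit the gpus argument entirely:
trainer = Trainer()
```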
