Allow user to select individual TPU core to train on #1729
Merged
williamFalcon merged 29 commits into Lightning-AI:master from lezwon:feature/1539_tpu_train_parallel on May 17, 2020
Changes from 13 commits
Commits
29 commits
8995fbc
added tpu_id
bd9e88c
train on individual tpu
1daadfa
parallel loader if tpu_id is None
e4d49d0
removed progress_bar_refresh_rate
0ed38cd
chlog
Borda 725ef5d
replaced num_tpu_cores with tpu_cores
c0a4f9d
set tpu_id to None if int
f25d516
changed num_tpu_cores to tpu_cores in docs
a93c6bc
Merge branch 'master' into feature/1539_tpu_train_parallel
lezwon b22f485
updated docs
cdda262
Merge branch 'master' into feature/1539_tpu_train_parallel
lezwon 0669ad2
updated __init__.py
2253b9f
Update pytorch_lightning/trainer/__init__.py
Borda 67c5688
check if tpu_cores is a list
lezwon ec278d1
xla device conditional
100071b
num_tpu_cores deprecation
8adb0a9
removed duplicate warning
34f2209
Merge remote-tracking branch 'official/master' into feature/1539_tpu_…
f779d01
fixed pep8 error
dafe174
Revert "removed duplicate warning"
4c6958e
deprecated api update
5c0db30
fixed recursion error
c7a9b4e
fixed tests
83e5d99
fixed flake errors
230831e
Merge remote-tracking branch 'official/master' into feature/1539_tpu_…
59e0b49
removed current_tpu_index
f22d90d
Merge branch 'master' into feature/1539_tpu_train_parallel
williamFalcon 940f70b
Update CHANGELOG.md
Borda ec300ee
Update trainer.py
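
For context on the commits above ("replaced num_tpu_cores with tpu_cores", "check if tpu_cores is a list", "num_tpu_cores deprecation"), here is a minimal usage sketch of the API this PR moves toward. The exact argument handling is inferred from the commit messages and review discussion rather than taken from the diff, so treat it as illustrative only:

```python
from pytorch_lightning import Trainer

# Old API (deprecated by this PR): request a number of TPU cores.
# trainer = Trainer(num_tpu_cores=8)

# New API: an int still requests that many cores...
trainer = Trainer(tpu_cores=8)

# ...while a one-element list selects a single, specific core index.
trainer = Trainer(tpu_cores=[5])
```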
I think this now makes it ONLY possible to train on 1 core, no? Not multiple cores.
I think so... @lezwon ^^
I have noticed that if `self.tpu_id` is `None` and I use `xmp.spawn`, the model trains at the same speed as when all cores are being used. So I assumed that all cores are in use. I could add some logging to confirm. Or maybe just add a conditional for `xm.xla_device()`?
ONLY when the user requests a specific TPU index should we use `model.to(xm.xla_device(self.tpu_id))`; otherwise, leave it as it was.

@Borda we need TPU tests to make sure this PR doesn't break functionality.
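
A rough sketch of the conditional being discussed above. Only the `xm.xla_device(tpu_id)` vs. `xmp.spawn` split reflects the review comments; the `fit_on_tpu`/`tpu_train` names are hypothetical and stand in for whatever entry point the Trainer actually uses:

```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp


def fit_on_tpu(trainer, model):
    # `trainer.tpu_train` is a hypothetical per-process training entry point,
    # used here only to illustrate the branching under discussion.
    if trainer.tpu_id is not None:
        # A specific core was requested: move the model to that XLA device
        # and train in a single process on that core only.
        model = model.to(xm.xla_device(trainer.tpu_id))
        trainer.tpu_train(0, model)
    else:
        # No specific core: keep the previous behaviour and spawn one process
        # per core. xmp.spawn passes the process index as the first argument
        # to the target function; assumes trainer.tpu_cores is an int here.
        xmp.spawn(trainer.tpu_train, args=(model,), nprocs=trainer.tpu_cores)
```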