
Copy batch for local forward #532

Merged 1 commit on Nov 23, 2019

Conversation

@tullie (Contributor) commented Nov 20, 2019

When truncated_bptt > 1 and using a single GPU without dp/ddp, there's a bug where the batch split isn't freed after training_step. This PR solves the issue by passing only a copy of the batch to training_step, which is probably how it's done within the internals of dp/ddp.
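A minimal sketch of the idea, not the actual Lightning source (names like forward_split are illustrative). A shallow copy of the split is moved to the device and handed to training_step, so the caller's segment list never ends up referencing the GPU tensors:

```python
import copy
import torch

def forward_split(training_step, split, device):
    # Shallow-copy the split so the caller's segment list keeps its CPU
    # tensors, then move only the copy's tensors to the device.
    batch_copy = copy.copy(split)
    for i, t in enumerate(batch_copy):
        batch_copy[i] = t.to(device)
    # Once this call returns and `batch_copy` goes out of scope, the GPU
    # tensors have no remaining references and their memory can be freed.
    return training_step(batch_copy)
```

A shallow copy is enough here because moving the tensors rebinds the copied list's elements; the original CPU tensors in the segment list are untouched.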

@Borda (Member) left a comment


I am not sure how this fixes a bug... please add some description of the bug...
This copy may prevent interference between batch changes from multiple points, but is that really the problem?

@tullie (Contributor, Author) commented Nov 20, 2019

So the issue is that when using truncated_bptt > 1, each batch is split into segments of truncated_bptt size (all stored on the CPU). These segments are enumerated, moved to the GPU, and then passed to training_step. The problem is that after training_step returns they're not released (the segment list still holds a reference), so the GPU memory keeps accumulating until the end of the batch.

By creating a batch copy with a local reference, moving that to the GPU, and then passing it to training_step, the copy goes out of scope after training_step returns and its GPU memory is automatically released before moving on to the next segment.

Another solution might be to explicitly move the tensors back to the CPU after training_step, but I'm not convinced that's better than this PR.
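A minimal sketch of the failure mode and the fix described above, assuming the device-transfer helper mutates each split in place (move_split_ and the surrounding names are illustrative, not the Lightning internals):

```python
import copy
import torch

def move_split_(split, device):
    # Moves every tensor in `split` to `device` *in place*; the list that
    # holds the segments therefore ends up referencing the GPU tensors.
    for i, t in enumerate(split):
        split[i] = t.to(device)
    return split

def train_buggy(training_step, splits, device):
    for split in splits:
        split = move_split_(split, device)
        training_step(split)
        # Leak: `splits` still references the GPU tensors, so memory
        # accumulates across segments until the whole batch is done.

def train_fixed(training_step, splits, device):
    for split in splits:
        batch_copy = move_split_(copy.copy(split), device)
        training_step(batch_copy)
        # `batch_copy` was the only reference to the GPU tensors; it is
        # rebound on the next iteration and the memory can be released.
```

Compared with moving the tensors back to the CPU after training_step, the copy approach avoids an extra device transfer per segment, which is the cost flagged in the next comment.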

@williamFalcon (Contributor)

Good catch. Let's avoid GPU-CPU transfers because they're very expensive.
