
Integrate toma for automatic batch sizing #1444

Closed · williamFalcon opened this issue Apr 10, 2020 · 5 comments · Fixed by #1949
Labels: feature (Is an improvement or enhancement), help wanted (Open to be worked on)

Comments

@williamFalcon (Contributor) commented Apr 10, 2020

@BlackHC want to integrate into lightning?

https://github.com/BlackHC/toma

@williamFalcon added the feature and help wanted labels on Apr 10, 2020
@SkafteNicki (Member) commented:

Since we have our own implementation of this now, I am closing this.
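
For context, the Lightning-side feature referenced here is exposed through the auto_scale_batch_size Trainer flag. A minimal sketch of its use follows; the exact invocation and where batch_size must live on the model vary across Lightning versions, so treat this as illustrative rather than definitive:

import pytorch_lightning as pl

# The model is expected to expose a tunable `batch_size` (e.g. via hparams).
trainer = pl.Trainer(auto_scale_batch_size='power')  # keep doubling the batch size until OOM
trainer.fit(model)  # the scaling search runs before the actual training loop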

@BlackHC (Contributor) commented May 25, 2020

Thanks for this PR implementing toma in PyTorch Lightning!

If you copy code and ideas from my projects, could you please add a mention of them, too? I see that you're a fellow PhD student, so you are aware of the importance of credit assignment.

In particular, if you copy code verbatim and remove helpful comments... maybe add them back.

import gc

import torch


def is_cuda_out_of_memory(exception):
    # Detect the standard PyTorch CUDA OOM RuntimeError by its message.
    return (
        isinstance(exception, RuntimeError) and len(exception.args) == 1 and "CUDA out of memory." in exception.args[0]
    )


def is_cudnn_snafu(exception):
    # For/because of https://github.com/pytorch/pytorch/issues/4107
    return (
        isinstance(exception, RuntimeError)
        and len(exception.args) == 1
        and "cuDNN error: CUDNN_STATUS_NOT_SUPPORTED." in exception.args[0]
    )


def gc_cuda():
    """Garbage collect Torch (CUDA) memory."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

is from https://github.com/BlackHC/toma/blob/master/toma/torch_cuda_memory.py.
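
For context, helpers like these are typically used in a retry loop: run the workload, and if one of these recoverable CUDA errors fires, free cached memory and retry with a smaller batch size. A hedged sketch using the helpers above (the function name here is illustrative, not toma's actual API):

def call_with_shrinking_batchsize(fn, initial_batchsize):
    # Sketch: retry `fn(batchsize)` with a halved batch size whenever
    # a recoverable CUDA error is raised.
    batchsize = initial_batchsize
    while True:
        try:
            return fn(batchsize)
        except RuntimeError as exception:
            if batchsize > 1 and (is_cuda_out_of_memory(exception) or is_cudnn_snafu(exception)):
                gc_cuda()        # free cached CUDA memory before retrying
                batchsize //= 2  # halve the batch size and try again
            else:
                raise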

Now, this PR contains lots of other code too, and I think it's great, but maybe add a link or a mention.

Thank you,
Andreas

PS: Keeping my rather random method names is a bit of a give-away.
PPS: Also commented on the PR.

@williamFalcon (Contributor, Author) commented May 25, 2020

@BlackHC I'm sorry, I had no idea this code was copied... @SkafteNicki, generally we like to do our own implementations of features. I'm sure it wasn't done with bad intentions, so we just need to find something that works here. Maybe we forgot to add this to our contribution guidelines (@Borda, mind updating?).

I suggest a few things to rectify this: either

1. we use toma as-is and add it as a dependency for this feature, or
2. we come up with our own actual implementation.

But seeing how the code was copied from the toma repo, I would rather play nice and bring toma in as an actual dependency (a rough sketch of what that might look like follows below).
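
For illustration, depending on toma directly might look roughly like this (a sketch following the example in toma's README; toma.batch retries the decorated function with smaller batch sizes when CUDA runs out of memory):

from toma import toma

@toma.batch(initial_batchsize=512)
def run_inference(batchsize, model, dataset):
    # toma calls this with batchsize=512 first, and retries with smaller
    # values if a CUDA out-of-memory error is raised.
    ...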

@BlackHC my deepest apologies, I was not aware that this code came from your repo!

@PyTorchLightning/core-contributors thoughts?

@Borda (Member) commented May 25, 2020

@BlackHC I am sorry about this, I was not aware of it...
I would add toma as a dependency; there is no need to reinvent the wheel unless we can build a better wheel :]
@SkafteNicki or @BlackHC (who would then also be a mentioned contributor in the next release), mind sending a PR that imports the mentioned util functions from the toma lib?

@Borda mentioned this issue on May 25, 2020
@BlackHC (Contributor) commented May 26, 2020

I want to thank everyone for replying so quickly.

For future reference, most messages are in #1638.

Thank you all!
