
Unwanted accumulate_grad_batches behavior #1549

Closed
sebastienwood opened this issue Apr 21, 2020 · 2 comments · Fixed by #2853
Labels
bug (Something isn't working) · feature (Is an improvement or enhancement) · good first issue (Good for newcomers) · help wanted (Open to be worked on)
Milestone

Comments

@sebastienwood

🐛 Bug

When using the accumulate_grad_batches flag for the trainer, an action that is supposed to be performed at the last mini-batch is not carried out.
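
For reference, the flag in question is set on the Trainer roughly as below (a minimal sketch; the accumulation value 4 and the `model` variable are arbitrary placeholders):

    import pytorch_lightning as pl

    # Accumulate gradients over 4 mini-batches before each optimizer step;
    # `model` is a LightningModule defined elsewhere.
    trainer = pl.Trainer(accumulate_grad_batches=4)
    trainer.fit(model)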

To Reproduce

Steps to reproduce the behavior:

  1. In on_after_backward, define some logic that runs only at the last mini-batch, e.g.:

         if self.__nbbatch - 1 <= self.__batchidx:
             some_param.grad += gradient_penalty

  2. Run with and without the flag.
  3. If running with the flag, the gradient penalty has no effect (the optimizer probably didn't take a step for the last mini-batch).

Code sample

See above.
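
For completeness, a minimal, hypothetical sketch of the kind of module involved (the module name, layer, loss, penalty value, and the __nbbatch/__batchidx bookkeeping are placeholders standing in for the actual code):

    import torch
    import pytorch_lightning as pl

    class PenalizedModule(pl.LightningModule):
        def __init__(self, nb_batch):
            super().__init__()
            self.layer = torch.nn.Linear(32, 1)
            self.__nbbatch = nb_batch   # total number of mini-batches per epoch
            self.__batchidx = 0         # index of the current mini-batch

        def training_step(self, batch, batch_idx):
            self.__batchidx = batch_idx
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            return {"loss": loss}

        def on_after_backward(self):
            # Apply the gradient penalty only at the last mini-batch of the epoch.
            if self.__nbbatch - 1 <= self.__batchidx:
                gradient_penalty = 0.1  # placeholder value
                for p in self.layer.parameters():
                    if p.grad is not None:
                        p.grad += gradient_penalty

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.01)

With accumulate_grad_batches > 1, the branch in on_after_backward can fire on a mini-batch for which no optimizer step is taken, which is the behavior reported above.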

Expected behavior

I manually compute a gradient penalty that is applied only at the last mini-batch of an epoch. Using the flag shouldn't break this behavior.

Environment

PyTorch version: 1.6.0.dev20200403
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: CentOS Linux release 7.7.1908 (Core)
GCC version: (Homebrew GCC 5.5.0_7) 5.5.0
CMake version: version 3.13.0

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla P100-SXM2-16GB
GPU 1: Tesla P100-SXM2-16GB
GPU 2: Tesla P100-SXM2-16GB
GPU 3: Tesla P100-SXM2-16GB

Nvidia driver version: 440.33.01
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.18.1
[conda] blas 1.0 mkl
[conda] kmeans-pytorch 0.2 pypi_0 pypi
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.6.0.dev20200403 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch-nightly
[conda] pytorch-lightning 0.7.3 pypi_0 pypi
[conda] pytorch-memlab 0.0.4 pypi_0 pypi
[conda] pytorch-pcen 0.0.1 pypi_0 pypi
[conda] torchvision 0.6.0.dev20200403 py37_cu101 pytorch-nightly

Additional context

It isn't a bug per se, but the behavior should at least be documented and, ideally, controllable with a flag.

@sebastienwood added the bug (Something isn't working) and help wanted (Open to be worked on) labels on Apr 21, 2020
@stale

stale bot commented Jun 20, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The stale bot added the won't fix (This will not be worked on) label on Jun 20, 2020
@williamFalcon added the feature (Is an improvement or enhancement) label and removed the won't fix label on Jun 26, 2020
@williamFalcon
Contributor

@sebastienwood thanks for bringing this up! we're looking at it for next sprint

@edenlightning added this to the 0.9.0 milestone on Jul 29, 2020
@Borda added the good first issue (Good for newcomers) label on Aug 4, 2020