Unwanted accumulate_grad_batches behavior #1549
Labels: bug (Something isn't working), feature (Is an improvement or enhancement), good first issue (Good for newcomers), help wanted (Open to be worked on)
🐛 Bug
When using the flag `accumulate_grad_batches` for the trainer, an action that is meant to be performed at the last mini-batch of an epoch is not executed.
To Reproduce
Steps to reproduce the behavior:
In `on_after_backward`, define some logic that should run only at the last mini-batch of an epoch; with `accumulate_grad_batches` set, that logic never runs.
Code sample
See above.
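No standalone script was attached, so here is a minimal sketch of the kind of setup described above. The module, the toy penalty term, and the use of `trainer.batch_idx` / `trainer.num_training_batches` for last-batch bookkeeping are illustrative assumptions, not the reporter's original code:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class PenaltyModule(pl.LightningModule):
    """Hypothetical module: applies a penalty only on the last mini-batch."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": F.mse_loss(self.layer(x), y)}

    def on_after_backward(self):
        # Logic intended to run only on the LAST mini-batch of the epoch.
        # The issue: with accumulate_grad_batches > 1, this branch is skipped.
        if self.trainer.batch_idx == self.trainer.num_training_batches - 1:
            for p in self.parameters():
                if p.grad is not None:
                    p.grad.add_(0.01 * p.data)  # toy stand-in for a gradient penalty

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


model = PenaltyModule()
data = TensorDataset(torch.randn(64, 32), torch.randn(64, 1))
trainer = pl.Trainer(max_epochs=1, accumulate_grad_batches=4)
trainer.fit(model, DataLoader(data, batch_size=8))
```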
Expected behavior
I manually compute a gradient penalty that is applied only at the last mini-batch of an epoch. Using the flag shouldn't break this behavior.
Environment
PyTorch version: 1.6.0.dev20200403
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: CentOS Linux release 7.7.1908 (Core)
GCC version: (Homebrew GCC 5.5.0_7) 5.5.0
CMake version: version 3.13.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration:
GPU 0: Tesla P100-SXM2-16GB
GPU 1: Tesla P100-SXM2-16GB
GPU 2: Tesla P100-SXM2-16GB
GPU 3: Tesla P100-SXM2-16GB
Nvidia driver version: 440.33.01
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[conda] blas 1.0 mkl
[conda] kmeans-pytorch 0.2 pypi_0 pypi
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.6.0.dev20200403 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch-nightly
[conda] pytorch-lightning 0.7.3 pypi_0 pypi
[conda] pytorch-memlab 0.0.4 pypi_0 pypi
[conda] pytorch-pcen 0.0.1 pypi_0 pypi
[conda] torchvision 0.6.0.dev20200403 py37_cu101 pytorch-nightly
Additional context
It isn't a bug per se, but the behavior should at least be documented, and ideally be controllable with a flag.