Bug in MultiStepLR lr scheduler #31828
Comments
you should use
Did anyone try this?
If you are compiling from master, please make sure to have the latest. Schedulers no longer take the `epoch` argument.
Thank you for your response!
Not all schedulers support that parameter in the first place. Moreover, we made schedulers chainable, and the epoch parameter doesn't extend nicely. See also #26423.
For this, you can run the scheduler over a loop, or you can save the state and load it.
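The "run the scheduler over a loop" workaround can be sketched in plain Python with a toy stand-in (the `MiniMultiStepLR` class below is hypothetical, for illustration only — it is not PyTorch's implementation):

```python
class MiniMultiStepLR:
    """Toy stand-in for torch.optim.lr_scheduler.MultiStepLR (hypothetical)."""
    def __init__(self, base_lr, milestones, gamma=0.1):
        self.lr = base_lr
        self.milestones = set(milestones)
        self.gamma = gamma
        self.last_epoch = -1

    def step(self):
        # Advance one epoch; decay the lr when a milestone epoch is reached.
        self.last_epoch += 1
        if self.last_epoch in self.milestones:
            self.lr *= self.gamma

sched = MiniMultiStepLR(base_lr=0.1, milestones=[30, 80])

# Instead of the removed sched.step(epoch=50), fast-forward in a loop:
for _ in range(51):          # advances last_epoch from -1 to 50
    sched.step()
# One milestone (30) has passed, so lr == 0.1 * gamma (up to float error)
```

Saving `state_dict()` and loading it on resume achieves the same effect in PyTorch without replaying every step.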
This bug is a duplicate of #33229.
🐛 Bug
Adding the `epoch` argument to the `step()` function of `MultiStepLR` leads to an incorrect learning rate.
To Reproduce
Output
Expected behavior
Environment
PyTorch version: 1.4.0a0+d5bf51b
Is debug build: No
CUDA used to build PyTorch: 9.0
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.14.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: TITAN Xp
GPU 1: TITAN Xp
GPU 2: TITAN Xp
GPU 3: TITAN Xp
Nvidia driver version: 430.26
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.3
[pip] torch==1.4.0a0+d5bf51b
[conda] blas 1.0 mkl
[conda] magma-cuda90 2.5.0 1 pytorch
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.15 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] torch 1.4.0a0+d5bf51b pypi_0 pypi
Additional context
A possible cause is that the `milestones` attribute of `MultiStepLR` is a `Counter` rather than a list, which leads to incorrect behavior of `bisect` in the `get_lr` function.
cc @vincentqb
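The Counter-vs-list mismatch can be demonstrated with plain Python (no torch needed). `bisect_right` assumes a sorted sequence indexed by position; a `Counter` happens to support `__len__` and `__getitem__`, so no error is raised, but the lookups return element *counts* instead of milestone values:

```python
from bisect import bisect_right
from collections import Counter

milestones = [30, 80]
counter = Counter(milestones)   # {30: 1, 80: 1}

epoch = 50
# counter[0] and counter[1] return counts (0 here, since 0 and 1 are not
# milestones), so every probed "element" compares as < 50 and bisect
# silently walks to the end of the "sequence".
wrong = bisect_right(counter, epoch)             # 2
right = bisect_right(sorted(milestones), epoch)  # 1: only milestone 30 passed

# With lr = base_lr * gamma ** exponent, the Counter path decays the
# learning rate twice instead of once -- a silently wrong lr.
```

This matches the reported symptom: no exception, just a false learning rate after `step(epoch)`.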