This repository has been archived by the owner on Jan 15, 2024. It is now read-only.

[FEATURE] Add "Reduce LR On Plateau" scheduler #897

Open · wants to merge 2 commits into v0.x

Conversation

haven-jeon
Member

Description

Adds an LR scheduler that reduces the learning rate when a metric has stopped improving. #887

Code adapted from https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
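The reduce-on-plateau idea the PR implements can be sketched as follows. This is a minimal illustration, not the PR's actual code (class and attribute names here are hypothetical): track the best metric seen so far, count consecutive non-improving steps, and multiply the learning rate by a decay factor once the count exceeds a patience threshold.

```python
import numpy as np

class ReduceLROnPlateauSketch:
    """Minimal sketch of reduce-LR-on-plateau (hypothetical names, not the
    PR's code): cut the LR by `factor` once the monitored metric fails to
    improve for more than `patience` consecutive steps."""

    def __init__(self, lr=0.1, mode='min', factor=0.5, patience=2, min_lr=1e-6):
        self.lr = lr
        self.mode = mode            # 'min': lower metric is better
        self.factor = factor        # multiplicative LR decay
        self.patience = patience    # non-improving steps tolerated
        self.min_lr = min_lr
        self.best = np.inf if mode == 'min' else -np.inf
        self.num_bad_steps = 0

    def step(self, metric):
        improved = metric < self.best if self.mode == 'min' else metric > self.best
        if improved:
            self.best = metric
            self.num_bad_steps = 0
        else:
            self.num_bad_steps += 1
            if self.num_bad_steps > self.patience:
                # plateau detected: decay the LR, but never below min_lr
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.num_bad_steps = 0
        return self.lr
```

For example, with `patience=2`, a validation-loss sequence that stops improving triggers a reduction on the third non-improving step.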

Checklist

Essentials

  • [v] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
  • [v] Changes are complete (i.e. I finished coding on this PR)
  • [v] All changes have test coverage
  • [v] Code is well-documented

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@haven-jeon haven-jeon requested a review from szha as a code owner August 24, 2019 11:33
@mli
Member

mli commented Aug 24, 2019

Job PR-897/1 is complete.
Docs are uploaded to http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR-897/1/index.html

@szha szha requested review from szhengac and leezu August 27, 2019 00:27
else:  # mode == 'max'
    self.mode_worse = -np.Inf

self.is_better = partial(self._cmp, mode, threshold_mode, threshold)
Member

Why do you want to use partial? mode, threshold_mode, and threshold are simply class variables, so you can use them directly by adding the self. prefix.

Member Author

Right. I will fix.

def in_cooldown(self):
    return self.cooldown_counter > 0

def _cmp(self, mode, threshold_mode, threshold, a, best):
Member

The current design does not look very scalable. That means it requires hard-coding if we would like to add some changes/schedules.

Member Author

Thanks for the comments.
I will consider a more flexible class design.
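One way to address the scalability concern is to accept a user-supplied comparison callable instead of hard-coding mode/threshold branches. The sketch below is a hypothetical design in the spirit of the review comment, not anything from the PR:

```python
class FlexiblePlateauScheduler:
    """Hypothetical extensible design: accept any callable
    is_better(candidate, best) so new comparison schemes can be plugged
    in without modifying the scheduler class itself."""

    def __init__(self, base_lr, is_better, factor=0.1, patience=10):
        self.lr = base_lr
        self.is_better = is_better  # pluggable comparison strategy
        self.factor = factor
        self.patience = patience
        self.best = None
        self.num_bad = 0

    def step(self, metric):
        if self.best is None or self.is_better(metric, self.best):
            self.best = metric
            self.num_bad = 0
        else:
            self.num_bad += 1
            if self.num_bad > self.patience:
                self.lr *= self.factor
                self.num_bad = 0
        return self.lr
```

A caller could then pass e.g. `is_better=lambda a, b: a < b * 0.999` for a relative-improvement criterion, and new schemes need no changes to the class.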

@szha
Member

szha commented Sep 11, 2019

@haven-jeon gentle ping

@szha szha changed the base branch from master to v0.x August 13, 2020 02:18
@szha szha requested a review from a team as a code owner August 13, 2020 02:18
4 participants