
Fix docstrings on torch/optim and torch/autograd #113218

Closed

Conversation

alperenunlu (Contributor)

Fixes #112593

Description

Fixes the docstrings in the following files.

Counts reported by `pydocstyle path-to-file --count`, before -> after:

| File | Count |
| --- | --- |
| torch/optim/lr_scheduler.py | 96 -> 39 |
| torch/optim/nadam.py | 6 -> 4 |
| torch/optim/optimizer.py | 31 -> 6 |
| torch/optim/radam.py | 6 -> 4 |
| torch/optim/rmsprop.py | 7 -> 4 |
| torch/optim/rprop.py | 6 -> 4 |
| torch/optim/sgd.py | 6 -> 4 |
| torch/optim/swa_utils.py | 15 -> 10 |
| torch/optim/_multi_tensor/__init__.py | 2 -> 1 |
| torch/autograd/__init__.py | 8 -> 0 |
| torch/autograd/anomaly_mode.py | 8 -> 7 |
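
To make those counts concrete, here is a hypothetical before/after pair (not taken from this diff) showing the kind of violation pydocstyle reports and the shape the fixes move toward:

```python
# Hypothetical example, not from the actual PR diff. The "before" form triggers
# pydocstyle D205 (1 blank line required between summary line and description),
# D400 (first line should end with a period), and D401 (first line should be in
# imperative mood); the "after" form passes all three checks.

def load_state_before(path):
    """Loads a checkpoint from disk
    and restores the optimizer state"""
    return path  # placeholder body


def load_state_after(path):
    """Load a checkpoint from disk.

    Restores the optimizer state found at ``path``.
    """
    return path  # placeholder body
```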


pytorch-bot bot commented Nov 8, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/113218

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit a1f039b with merge base 9f71452:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


linux-foundation-easycla bot commented Nov 8, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

@albanD albanD added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and docathon-h2-2023 (Issues for the docathon in H2 2023) labels on Nov 8, 2023
@albanD albanD requested review from svekars and removed request for albanD November 8, 2023 19:25
Comment on lines +449 to +451
"""Decays the learning rate of each parameter group by a small constant factor.

Until the number of epoch reaches a pre-defined milestone: total_iters.

Contributor:

Suggested change:
- """Decays the learning rate of each parameter group by a small constant factor.
- Until the number of epoch reaches a pre-defined milestone: total_iters.
+ """Decays the learning rate of each parameter group by a small constant factor until the number of epoch reaches a pre-defined milestone: total_iters.

Comment on lines +508 to +510
"""Decays the learning rate of each parameter group by linearly changing small multiplicative factor.

Until the number of epoch reaches a pre-defined milestone: total_iters.

Contributor:

Suggested change:
- """Decays the learning rate of each parameter group by linearly changing small multiplicative factor.
- Until the number of epoch reaches a pre-defined milestone: total_iters.
+ """Decays the learning rate of each parameter group by linearly changing small multiplicative factor until the number of epoch reaches a pre-defined milestone: total_iters.

optimization process and milestone points that provides exact intervals to reflect
which scheduler is supposed to be called at a given epoch.
"""
Receives the list of schedulers that is expected to be called sequentially during optimization process.

Contributor:

Suggested change:
- Receives the list of schedulers that is expected to be called sequentially during optimization process.
+ Receives the list of schedulers that is expected to be called sequentially during the optimization process.

@alperenunlu (Contributor, Author):

@svekars These suggested lines are too long for flake8. Should I add them to the PR?

@svekars svekars added the medium Label for medium docathon tasks label Nov 9, 2023
r"""Register a load_state_dict pre-hook which will be called before
:meth:`~torch.optim.Optimizer.load_state_dict` is called. It should have the
following signature::
r"""Register a load_state_dict pre-hook which will be called.

Contributor:

This seems odd. Can you revert this change?

Contributor:

You want to have the first sentence as:

Register a load_state_dict pre-hook which will be called before :meth:`~torch.optim.Optimizer.load_state_dict` is called

(with the proper quoting)

Contributor (Author):

pydocstyle wants a summary line, and the linter wants lines shorter than 120 characters.

Register a load_state_dict pre-hook which will be called before :meth:`~torch.optim.Optimizer.load_state_dict` is called

is still too long for the linter to accept because of the indentation.

Contributor:

I see. I think we should err on the side of being more informative rather than appeasing the linter, though.

Contributor (Author):

@soulitzer I also felt like it became odd, but I couldn't come up with another solution that would pass both pydocstyle and flake8.

Contributor:

It is probably fine to leave it as it was before. That way flake8 passes, and we don't regress the meaning of the first sentence.
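
For what it's worth, here is a minimal sketch of one possible middle ground (not what this PR does, and assuming the repository's lint setup honors pydocstyle's inline noqa comments): keep the original, more informative first sentence and opt only this definition out of the conflicting summary-line checks, leaving flake8's 120-character limit untouched.

```python
# Sketch only. _OptimizerSketch and the simplified signature are hypothetical
# stand-ins, not the real torch.optim.Optimizer code. The inline noqa silences
# pydocstyle's D205/D400 summary-line checks for just this method, so the
# multi-line first sentence can stay as it was.
class _OptimizerSketch:
    def register_load_state_dict_pre_hook(self, hook):  # noqa: D205,D400
        r"""Register a load_state_dict pre-hook which will be called before
        :meth:`~torch.optim.Optimizer.load_state_dict` is called.
        """
        raise NotImplementedError  # placeholder body
```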

Two further review threads on torch/optim/optimizer.py were marked outdated and resolved.

@soulitzer soulitzer (Contributor) left a comment:

Thanks for the fix, I added some suggestions (mainly where I think more context in the first sentence is important to understanding).

@alperenunlu (Contributor, Author):

Thank you @soulitzer!!

@alperenunlu (Contributor, Author):

@svekars Can you also look at soulitzer's review?

@soulitzer (Contributor):

The autograd parts look good to me, but maybe @janeyx99 should take a closer look at the optim changes.

@@ -187,8 +185,9 @@ def __exit__(self, type, value, traceback):


  class LambdaLR(LRScheduler):
- """Sets the learning rate of each parameter group to the initial lr
- times a given function. When last_epoch=-1, sets initial lr as lr.
+ """Sets the learning rate of each parameter group to the initial lr times a given function.

Contributor:

Suggested change:
- """Sets the learning rate of each parameter group to the initial lr times a given function.
+ """Set the learning rate of each parameter group to the initial lr times a given function.

Should this be singularized following the pattern everywhere else?

Contributor (Author):

Classes don't necessarily have to start with an imperative verb.
Therefore, I didn't modify the class docstrings to make them imperative.
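
As an aside, here is a stripped-down illustration of that distinction (hypothetical stand-in code, not the real torch.optim classes): as far as I know, pydocstyle's imperative-mood check (D401) only fires for function and method docstrings, so those read as commands, while a class summary can stay descriptive.

```python
# Hypothetical stand-in, not the real LambdaLR. The class summary stays
# descriptive ("Sets ..."), while the method summary is imperative
# ("Compute ...") to satisfy pydocstyle's D401 check on methods.
class LambdaLRSketch:
    """Sets the learning rate of each parameter group to the initial lr times a given function."""

    def __init__(self, base_lrs, lr_lambda):
        self.base_lrs = base_lrs
        self.lr_lambda = lr_lambda
        self.last_epoch = 0

    def get_lr(self):
        """Compute the learning rate of each parameter group."""
        return [base_lr * self.lr_lambda(self.last_epoch) for base_lr in self.base_lrs]
```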

- in the specified function. When last_epoch=-1, sets initial lr as lr.
+ """Multiply the learning rate of each parameter group by the factor given in the specified function.
+
+ When last_epoch=-1, sets initial lr as lr.

Contributor:

Suggested change:
- When last_epoch=-1, sets initial lr as lr.
+ When last_epoch=-1, set the initial lr as lr.

Contributor (Author):

I didn't touch them since they are in a class and not the summary line.

- times a given function. When last_epoch=-1, sets initial lr as lr.
+ """Sets the learning rate of each parameter group to the initial lr times a given function.
+
+ When last_epoch=-1, sets initial lr as lr.

Contributor:

Suggested change:
- When last_epoch=-1, sets initial lr as lr.
+ When last_epoch=-1, set initial lr as lr.

here too

Contributor:

Maybe not, since this is a class.

Contributor (Author):

I also didn't touch them because they are in a class and not the summary line.

@janeyx99 janeyx99 (Contributor) left a comment:

Sorry for the delayed review; I've recently returned.

Left my review: most edits are good, but a few are wrong. Thank you for going through the docs!

Nine more review threads on torch/optim/lr_scheduler.py (five) and torch/optim/optimizer.py (four) were marked outdated and resolved.
alperenunlu and others added 8 commits November 22, 2023 02:36
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com> (listed on seven of the commits)
@alperenunlu (Contributor, Author):

Thank you for your review @janeyx99!!

github-actions bot commented:

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Jan 21, 2024
@janeyx99 janeyx99 removed the Stale label Feb 5, 2024

@janeyx99 janeyx99 (Contributor) left a comment:

So sorry I did not get back to you earlier!

@janeyx99 janeyx99 (Contributor) commented Feb 5, 2024

If you're willing to manually rebase and bring this PR up to date, please ping me and we can try to get this in this week.


github-actions bot commented Apr 5, 2024

(The bot posted the same stale notice as above.)

@github-actions github-actions bot added the Stale label Apr 5, 2024
@github-actions github-actions bot closed this May 5, 2024
Labels: docathon-h2-2023, medium, open source, release notes: optim, Stale, triaged
Projects: None yet
6 participants