Commit

Use consistent markdown formatting for the AdamW paper (#2722)
ronshapiro authored Aug 8, 2022
1 parent 66a81f9 commit 930dacf
Showing 1 changed file with 6 additions and 8 deletions.
14 changes: 6 additions & 8 deletions tensorflow_addons/optimizers/weight_decay_optimizers.py
@@ -26,7 +26,7 @@
class DecoupledWeightDecayExtension:
"""This class allows to extend optimizers with decoupled weight decay.
- It implements the decoupled weight decay described by Loshchilov & Hutter
+ It implements the decoupled weight decay described by [Loshchilov & Hutter]
(https://arxiv.org/pdf/1711.05101.pdf), in which the weight decay is
decoupled from the optimization steps w.r.t. to the loss function.
For SGD variants, this simplifies hyperparameter search since it decouples
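For context, the mixin documented in this hunk is typically consumed through the module's `extend_with_decoupled_weight_decay` factory. A minimal usage sketch, with illustrative hyperparameter values not taken from this commit:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Create an Adam variant whose weight decay is decoupled from the
# gradient-based update, by mixing DecoupledWeightDecayExtension into
# an existing Keras optimizer class.
AdamWLike = tfa.optimizers.extend_with_decoupled_weight_decay(
    tf.keras.optimizers.Adam
)

# The extended class accepts `weight_decay` alongside the base
# optimizer's usual arguments (illustrative values).
optimizer = AdamWLike(weight_decay=1e-4, learning_rate=1e-3)
```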
@@ -343,7 +343,7 @@ class OptimizerWithDecoupledWeightDecay(
This class computes the update step of `base_optimizer` and
additionally decays the variable with the weight decay being
decoupled from the optimization steps w.r.t. to the loss
- function, as described by Loshchilov & Hutter
+ function, as described by [Loshchilov & Hutter]
(https://arxiv.org/pdf/1711.05101.pdf). For SGD variants, this
simplifies hyperparameter search since it decouples the settings
of weight decay and learning rate. For adaptive gradient
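To make the decoupling concrete, here is a rough sketch of the combined update for plain SGD, following the description above rather than the library's actual implementation:

```python
import tensorflow as tf

def decoupled_sgd_step(var, grad, lr, weight_decay):
    # Illustrative decoupled update: the decay shrinks the variable
    # directly instead of being added to the loss as an L2 term.
    var.assign_sub(weight_decay * var)  # weight decay applied to the variable
    var.assign_sub(lr * grad)           # update step w.r.t. the loss
```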
@@ -367,9 +367,8 @@ class SGDW(DecoupledWeightDecayExtension, tf.keras.optimizers.SGD):
"""Optimizer that implements the Momentum algorithm with weight_decay.
This is an implementation of the SGDW optimizer described in "Decoupled
- Weight Decay Regularization" by Loshchilov & Hutter
- (https://arxiv.org/abs/1711.05101)
- ([pdf])(https://arxiv.org/pdf/1711.05101.pdf).
+ Weight Decay Regularization" by [Loshchilov & Hutter]
+ (https://arxiv.org/pdf/1711.05101.pdf).
It computes the update step of `tf.keras.optimizers.SGD` and additionally
decays the variable. Note that this is different from adding
L2 regularization on the variables to the loss. Decoupling the weight decay
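`SGDW` itself is exposed as `tfa.optimizers.SGDW`; a short usage sketch with illustrative hyperparameters:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative values only; the weight decay is applied to the variables
# directly rather than through an L2 term added to the loss.
optimizer = tfa.optimizers.SGDW(
    weight_decay=1e-4, learning_rate=0.1, momentum=0.9
)

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(optimizer=optimizer, loss="mse")
```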
@@ -447,9 +446,8 @@ class AdamW(DecoupledWeightDecayExtension, tf.keras.optimizers.Adam):
"""Optimizer that implements the Adam algorithm with weight decay.
This is an implementation of the AdamW optimizer described in "Decoupled
- Weight Decay Regularization" by Loshchilov & Hutter
- (https://arxiv.org/abs/1711.05101)
- ([pdf])(https://arxiv.org/pdf/1711.05101.pdf).
+ Weight Decay Regularization" by [Loshchilov & Hutter]
+ (https://arxiv.org/pdf/1711.05101.pdf).
It computes the update step of `tf.keras.optimizers.Adam` and additionally
decays the variable. Note that this is different from adding L2
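Similarly, `AdamW` is exposed as `tfa.optimizers.AdamW`; a short usage sketch with illustrative hyperparameters:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative values only; the decay step acts on the variables
# directly, decoupled from Adam's adaptive gradient update.
optimizer = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3)

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(optimizer=optimizer, loss="mse")
```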
