
Commit 2ca066e
Docs updates for stateful layers (#146)
jatinchowdhury18 authored Sep 27, 2024
1 parent f9c2c64 commit 2ca066e
Showing 6 changed files with 60 additions and 0 deletions.
10 changes: 10 additions & 0 deletions RTNeural/gru/gru.h
@@ -26,6 +26,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class GRULayer final : public Layer<T>
@@ -143,6 +148,11 @@ class GRULayer final : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
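A minimal sketch of the "stateless" pattern described in the comment above: `reset()` is called between sequences, so no recurrent state carries over from one sequence to the next. The `GRULayer` (in_size, out_size) constructor and `forward(const T*, T*)` signature are assumptions not shown in this diff; the sizes and data are hypothetical, and weight loading is omitted.

```cpp
// Sketch: "stateless" GRU usage by calling reset() between sequences.
// Constructor and forward() signatures are assumed; weight loading omitted.
#include <RTNeural/RTNeural.h>
#include <vector>

void processSequences(const std::vector<std::vector<float>>& sequences)
{
    RTNeural::GRULayer<float> gru { 1, 8 }; // 1 input, 8 hidden units (example sizes)
    float out[8] {};

    for(const auto& seq : sequences)
    {
        gru.reset(); // zero the recurrent state: each sequence starts fresh
        for(float x : seq)
            gru.forward(&x, out); // state still carries within the sequence
    }
}
```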
10 changes: 10 additions & 0 deletions RTNeural/gru/gru_eigen.h
@@ -16,6 +16,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class GRULayer : public Layer<T>
@@ -152,6 +157,11 @@ class GRULayer : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
10 changes: 10 additions & 0 deletions RTNeural/gru/gru_xsimd.h
@@ -16,6 +16,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class GRULayer : public Layer<T>
@@ -157,6 +162,11 @@ class GRULayer : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's GRU implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* GRU can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
10 changes: 10 additions & 0 deletions RTNeural/lstm/lstm.h
@@ -24,6 +24,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class LSTMLayer final : public Layer<T>
@@ -116,6 +121,11 @@ class LSTMLayer final : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
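And the default "stateful" pattern, as in streaming or real-time use where state should persist across processing blocks: `reset()` runs once up front, and `forward()` then carries state from block to block, matching TensorFlow's `stateful=True`. The `LSTMLayer` constructor and `forward()` signature are assumptions, as in the GRU sketch above; sizes are hypothetical and weight loading is omitted.

```cpp
// Sketch: default stateful pattern for block-based (streaming) processing.
// Constructor and forward() signatures are assumed; weight loading omitted.
#include <RTNeural/RTNeural.h>

struct StreamingLSTM
{
    RTNeural::LSTMLayer<float> lstm { 1, 16 }; // example sizes

    void prepare() { lstm.reset(); } // zero the state once, before the first forward()

    void processBlock(float* buffer, int numSamples)
    {
        float out[16];
        for(int n = 0; n < numSamples; ++n)
        {
            lstm.forward(&buffer[n], out); // state persists across calls and blocks
            buffer[n] = out[0]; // hypothetical: keep only the first output channel
        }
    }
};
```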
10 changes: 10 additions & 0 deletions RTNeural/lstm/lstm_eigen.h
@@ -16,6 +16,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class LSTMLayer : public Layer<T>
@@ -110,6 +115,11 @@ class LSTMLayer : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
10 changes: 10 additions & 0 deletions RTNeural/lstm/lstm_xsimd.h
@@ -17,6 +17,11 @@ namespace RTNEURAL_NAMESPACE
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, typename MathsProvider = DefaultMathsProvider>
class LSTMLayer : public Layer<T>
@@ -131,6 +136,11 @@ class LSTMLayer : public Layer<T>
* To ensure that the recurrent state is initialized to zero,
* please make sure to call `reset()` before your first call to
* the `forward()` method.
*
* Compared to TensorFlow's LSTM implementation, this layer will
* behave by default as if the parameter `stateful=True`. A "stateless"
* LSTM can be achieved by calling the `reset()` function in between
* calls to `forward()`.
*/
template <typename T, int in_sizet, int out_sizet,
SampleRateCorrectionMode sampleRateCorr = SampleRateCorrectionMode::None,
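The same stateless idea also applies to the compile-time layer variants declared in the templated sections of these diffs. In the sketch below, the class name `LSTMLayerT`, the `forward(const T (&)[in_size])` signature, and the public `outs` member are assumptions about the compile-time API; the default `SampleRateCorrectionMode::None` is taken from the declarations above, and sizes are hypothetical.

```cpp
// Sketch: stateless pattern with the compile-time (templated) layer variant.
// Class name, forward() signature, and `outs` member are assumptions.
#include <RTNeural/RTNeural.h>

void runSequence(const float* seq, int numSamples)
{
    RTNeural::LSTMLayerT<float, 1, 16> lstm; // sizes fixed at compile time
    lstm.reset(); // start this sequence from a zero state ("stateless" use)

    for(int n = 0; n < numSamples; ++n)
    {
        const float in[1] = { seq[n] };
        lstm.forward(in);
        // lstm.outs now holds the 16 outputs for this time step
    }
}
```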
