diff --git a/CHANGELOG.md b/CHANGELOG.md index ae83d17db07f..90621965c0d6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -70,6 +70,8 @@ To release a new version, please update the changelog as followed: ## [Unreleased] ### Added +- New Neural Type System documentation. Also added decorator to generate docs for input/output ports. +([PR #370](https://github.com/NVIDIA/NeMo/pull/370)) - @okuchaiev - New Neural Type System and its tests. ([PR #307](https://github.com/NVIDIA/NeMo/pull/307)) - @okuchaiev - Named tensors tuple module's output for graph construction. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3b3c46f5dcae..f13030fb2251 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -4,9 +4,9 @@ 2) Make sure you sign your commits. E.g. use ``git commit -s`` when committing -3) Make sure all unittests finish successfully before sending PR +3) Make sure all unittests finish successfully before sending a PR: run ``python -m unittest`` from NeMo's root folder -4) Send your Pull Request to `master` branch +4) Send your Pull Request to the `master` branch # Collection Guidelines @@ -28,9 +28,8 @@ Please note that CI needs to pass for all the modules and collections. 1. **Sensible**: code should make sense. If you think a piece of code might be confusing, write comments. ## Python style -We follow [PEP 8 style guide](https://www.python.org/dev/peps/pep-0008/) and we incorporate [pycodestyle](https://pypi.org/project/pycodestyle/) into our CI pipeline to check for style. Make sure that your code passes PEP 8 before creating a Pull Request. - -There are several tools to automatically format your code to be PEP 8 compliant, such as [autopep8](https://github.com/hhatto/autopep8). Your text editor might support its own auto PEP 8 plugin. +We use ``black`` to enforce our code style. To check whether your code passes the style check, run +``python setup.py style`` from NeMo's repo folder; if it does not pass, run ``python setup.py style --fix``. 1. Avoid wildcard imports: ``from X import *``, unless ``__all__`` is defined in ``X.py``. 1. Minimize the use of ``**kwargs``. @@ -47,7 +46,10 @@ There are several tools to automatically format your code to be PEP 8 compliant, 1. If a comment lasts multiple lines, use ``'''`` instead of ``#``. ## Nemo style -1. If you import a module from the same collection, use relative path instead of absolute path. For example, inside ``nemo_nlp``, use ``.utils`` instead of ``nemo_nelp.utils``. +1. Use absolute paths. 1. Before accessing something, always make sure that it exists. 1. Right inheritance. For example, if a module doesn't have any trainable weights, don't inherit from TrainableNM. 1. Naming consistency, both within NeMo and between NeMo and external literature. E.g. use the name ``logits`` for ``log_probs``, ``hidden_size`` for ``d_model``. +1. Make an effort to use the right Neural Types when designing your neural modules. If a type you need does not + exist, you can introduce one. See the documentation on how to do this. +1. 
When creating input/output ports for your modules, use the ``add_port_docs`` decorator to nicely generate docs for them. diff --git a/docs/sources/source/tutorials/neuraltypes.rst b/docs/sources/source/tutorials/neuraltypes.rst index 5620f3737c6f..ebcae6a1a235 100644 --- a/docs/sources/source/tutorials/neuraltypes.rst +++ b/docs/sources/source/tutorials/neuraltypes.rst @@ -1,63 +1,166 @@ Neural Types ============ -Neural Types are used to check input tensors to make sure that two neural modules are compatible, and catch -semantic and dimensionality errors. +Basics +~~~~~~ -Neural Types are implemented by :class:`NeuralType` class which is a mapping from Tensor's axis to :class:`AxisType`. +All input and output ports of every neural module in NeMo are typed. +The type system's goal is to check the compatibility of connected input/output port pairs. +The type system's constraints are checked when the user connects modules with each other and before any training or +inference is started. -:class:`AxisType` contains following information per axis: +Neural Types are implemented with the Python class :class:`NeuralType` and helper +classes derived from :class:`ElementType`, :class:`AxisType` and :class:`AxisKindAbstract`. -* Semantic Tag, which must inherit from :class:`BaseTag`, for example: :class:`BatchTag`, :class:`ChannelTag`, :class:`TimeTag`, etc. These tags can be related via `is-a` inheritance. -* Dimension: unsigned integer -* Descriptor: string +**A Neural Type contains two categories of information:** +* **axes** - represents what varying a particular axis means (e.g. batch, time, etc.) +* **elements_type** - represents the semantics and properties of what is stored inside the activations (audio signal, text embedding, logits, etc.) -To instantiate a NeuralType you should pass it a dictionary (axis2type) which will map axis to it's AxisType. -For example, a ResNet18 input and output ports can be described as: + +To instantiate a NeuralType you need to pass it the following arguments: `axes: Optional[Tuple] = None, +elements_type: ElementType = VoidType(), optional=False`. Typically, the only place where you need to instantiate +:class:`NeuralType` objects is inside your module's `input_ports` and +`output_ports` properties. + + +Consider the example below. It represents the output ports of an audio data layer used in the speech recognition collection. .. code-block:: python - input_ports = {"x": NeuralType({0: AxisType(BatchTag), - 1: AxisType(ChannelTag), - 2: AxisType(HeightTag, 224), - 3: AxisType(WidthTag, 224)})} - output_ports = {"output": NeuralType({ - 0: AxisType(BatchTag), - 1: AxisType(ChannelTag)})} + { + 'audio_signal': NeuralType(axes=(AxisType(kind=AxisKind.Batch, size=None, is_list=False), + AxisType(kind=AxisKind.Time, size=None, is_list=False)), + elements_type=AudioSignal(freq=self._sample_rate)), + 'a_sig_length': NeuralType(axes=(AxisType(kind=AxisKind.Batch, size=None, is_list=False),), + elements_type=LengthsType()), + 'transcripts': NeuralType(axes=(AxisType(kind=AxisKind.Batch, size=None, is_list=False), + AxisType(kind=AxisKind.Time, size=None, is_list=False)), + elements_type=LabelsType()), + 'transcript_length': NeuralType(axes=(AxisType(kind=AxisKind.Batch, size=None, is_list=False),), + elements_type=LengthsType()), + } + +A less verbose version of exactly the same output ports looks like this: + +.. 
code-block:: python + + { + 'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)), + 'a_sig_length': NeuralType(tuple('B'), LengthsType()), + 'transcripts': NeuralType(('B', 'T'), LabelsType()), + 'transcript_length': NeuralType(tuple('B'), LengthsType()), + } -**Neural type comparison** -Two :class:`NeuralType` objects can be compared using ``.compare`` method. -The result is: + +Neural type comparison +~~~~~~~~~~~~~~~~~~~~~~ + +Two :class:`NeuralType` objects are compared using the ``.compare()`` method. +The result is a :class:`NeuralTypeComparisonResult`: .. code-block:: python class NeuralTypeComparisonResult(Enum): - """The result of comparing two neural type objects for compatibility. - When comparing A.compare_to(B):""" - SAME = 0 - LESS = 1 # A is B - GREATER = 2 # B is A - DIM_INCOMPATIBLE = 3 # Resize connector might fix incompatibility - TRANSPOSE_SAME = 4 # A transpose will make them same - INCOMPATIBLE = 5 # A and B are incompatible. Can't fix incompatibility automatically + """The result of comparing two neural type objects for compatibility. + When comparing A.compare_to(B):""" + + SAME = 0 + LESS = 1 # A is B + GREATER = 2 # B is A + DIM_INCOMPATIBLE = 3 # Resize connector might fix incompatibility + TRANSPOSE_SAME = 4 # A transpose and/or converting between lists and tensors will make them same + CONTAINER_SIZE_MISMATCH = 5 # A and B contain different number of elements + INCOMPATIBLE = 6 # A and B are incompatible + SAME_TYPE_INCOMPATIBLE_PARAMS = 7 # A and B are of the same type but parametrized differently + + +Special cases +~~~~~~~~~~~~~ + +* **Void** element types. Sometimes it is necessary to have functionality similar to ``void*`` in C/C++: we still want to enforce the order and semantics of axes, but need to accept elements of any type. This can be achieved by using an instance of :class:`VoidType` as the ``elements_type`` argument. +* **Big void.** This type effectively disables all type checks. It is created with ``NeuralType()``, and the result of comparing it to any other type is always SAME. +* **AxisKind.Any.** This axis kind represents any axis. It is useful, for example, in losses, where the same loss module can be used in different applications and therefore with different axis kinds. + +Inheritance +~~~~~~~~~~~ + +Type inheritance is a very powerful tool in programming. NeMo's neural types support inheritance. Consider the +following example. + +**Example.** We want to represent the following: module A's output (out1) produces a mel spectrogram, +while module B's output (out2) produces an MFCC spectrogram. We also want a third module, C, which can perform data +augmentation on any kind of spectrogram. With NeMo's neural types, representing these semantics is easy: + +.. code-block:: python + + input = NeuralType(('B', 'D', 'T'), SpectrogramType()) + out1 = NeuralType(('B', 'D', 'T'), MelSpectrogramType()) + out2 = NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()) + + # then the following comparison results will be generated + input.compare(out1) == SAME + input.compare(out2) == SAME + out1.compare(input) == INCOMPATIBLE + out2.compare(out1) == INCOMPATIBLE + +This happens because both ``MelSpectrogramType`` and ``MFCCSpectrogramType`` inherit from the ``SpectrogramType`` class.
+
Notice that MFCC and mel spectrograms are not interchangeable, which is why ``out2.compare(out1) == INCOMPATIBLE``. -**Special cases** +Advanced usage +~~~~~~~~~~~~~~ -* *Non-tensor* objects should be denoted as ``NeuralType(None)`` -* *Optional*: input is as optional, if input is provided the type compatibility will be checked -* *Root* type is denoted by ``NeuralType({})``: A port of ``NeuralType({})`` type must accept NmTensors of any NeuralType: +**Extending with user-defined types.** If you need to add your own element types, create a new class inheriting from +:class:`ElementType`. Instead of using built-in axes kinds from +:class:`AxisKind`, you can define your own +by creating a new Python enum which inherits from :class:`AxisKindAbstract`. + +**Lists.** Sometimes a module's input or output should be a (possibly nested) list of Tensors. NeMo's +:class:`AxisType` class accepts an ``is_list`` argument which can be set to True. +Consider the example below: .. code-block:: python - root_type = NeuralType({}) - root_type.compare(any_other_neural_type) == NeuralTypeComparisonResult.SAME + T1 = NeuralType( + axes=( + AxisType(kind=AxisKind.Batch, size=None, is_list=True), + AxisType(kind=AxisKind.Time, size=None, is_list=True), + AxisType(kind=AxisKind.Dimension, size=32, is_list=False), + AxisType(kind=AxisKind.Dimension, size=128, is_list=False), + AxisType(kind=AxisKind.Dimension, size=256, is_list=False), + ), + elements_type=ChannelType(), + ) + +In this example, the first two axes are lists. That is, the object is a list of lists of rank-3 tensors with dimensions +(32x128x256). Note that all list axes must come before any tensor axis. + +.. tip:: + We strongly recommend avoiding lists where possible and using (possibly padded) tensors instead. -See "nemo/tests/test_neural_types.py" for more examples. + +**Named tuples (structures).** To represent struct-like objects, for example, bounding boxes in computer vision, use +the following syntax: + +.. code-block:: python + + class BoundingBox(ElementType): + def __str__(self): + return "bounding box from detection model" + def fields(self): + return ("X", "Y", "W", "H") + # ALSO ADD new, user-defined, axis kind + class AxisKind2(AxisKindAbstract): + Image = 0 + T1 = NeuralType(elements_type=BoundingBox(), + axes=(AxisType(kind=AxisKind.Batch, size=None, is_list=True), + AxisType(kind=AxisKind2.Image, size=None, is_list=True))) + +In the example above, we create a special "element type" class for BoundingBox which stores exactly 4 values. +We also add our own axis kind (Image). So the final Neural Type (T1) represents lists (for batch) of lists (for +image) of bounding boxes. Under the hood it is a list of lists of 4x1 tensors. **Neural Types help us to debug models** @@ -76,6 +179,5 @@ For example, module should concatenate (add) two input tensors X and Y along dim A module expects image of size 224x224 but gets 256x256. The type comparison will result in ``NeuralTypeComparisonResult.DIM_INCOMPATIBLE`` . -.. note:: - This type mechanism is represented by Python inheritance. That is, :class:`NmTensor` class inherits from :class:`NeuralType` class. 
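+To illustrate how these pieces fit together, here is a minimal sketch. The module name ``MyTokenLoss`` is made up
+for this example and ``_loss_function`` is omitted for brevity; the port declarations follow the same pattern used
+by NeMo's built-in modules, and ``add_port_docs`` (introduced together with this documentation) generates the port
+documentation from the returned dictionaries:
+
+.. code-block:: python
+
+    from nemo.backends.pytorch.nm import LossNM
+    from nemo.core.neural_types import (
+        LabelsType,
+        LogitsType,
+        LossType,
+        NeuralType,
+        NeuralTypeComparisonResult,
+    )
+    from nemo.utils.decorators import add_port_docs
+
+    class MyTokenLoss(LossNM):  # hypothetical module, for illustration only
+        @property
+        @add_port_docs()
+        def input_ports(self):
+            """Returns definitions of module input ports."""
+            # batch x time x vocab logits, batch x time labels
+            return {
+                'logits': NeuralType(('B', 'T', 'D'), LogitsType()),
+                'labels': NeuralType(('B', 'T'), LabelsType()),
+            }
+
+        @property
+        @add_port_docs()
+        def output_ports(self):
+            """Returns definitions of module output ports."""
+            return {'loss': NeuralType(elements_type=LossType())}
+
+    # identically-typed ports compare as SAME
+    t1 = NeuralType(('B', 'T'), LabelsType())
+    t2 = NeuralType(('B', 'T'), LabelsType())
+    assert t1.compare(t2) == NeuralTypeComparisonResult.SAME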
+ diff --git a/nemo/backends/pytorch/common/losses.py b/nemo/backends/pytorch/common/losses.py index d7f0f521f90e..dd0d70082f9f 100644 --- a/nemo/backends/pytorch/common/losses.py +++ b/nemo/backends/pytorch/common/losses.py @@ -3,6 +3,7 @@ from nemo.backends.pytorch.nm import LossNM from nemo.core.neural_types import LabelsType, LogitsType, LossType, NeuralType, RegressionValuesType +from nemo.utils.decorators import add_port_docs __all__ = ['SequenceLoss', 'CrossEntropyLoss', 'MSELoss'] @@ -32,18 +33,16 @@ class SequenceLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ return {'log_probs': NeuralType(axes=('B', 'T', 'D')), 'targets': NeuralType(axes=('B', 'T'))} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. - - loss: - NeuralType(None) - """ return {"loss": NeuralType(elements_type=LossType())} @@ -103,6 +102,7 @@ class CrossEntropyLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -112,6 +112,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -133,6 +134,7 @@ def _loss_function(self, logits, labels): class MSELoss(LossNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -148,6 +150,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/backends/pytorch/common/rnn.py b/nemo/backends/pytorch/common/rnn.py index ca67154786d0..eb9eb7e1e246 100644 --- a/nemo/backends/pytorch/common/rnn.py +++ b/nemo/backends/pytorch/common/rnn.py @@ -23,6 +23,7 @@ from nemo.backends.pytorch.common.parts import Attention from nemo.backends.pytorch.nm import TrainableNM from nemo.core import * +from nemo.utils.decorators import add_port_docs from nemo.utils.misc import pad_to __all__ = ['DecoderRNN', 'EncoderRNN'] @@ -65,6 +66,7 @@ class DecoderRNN(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -78,6 +80,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -203,6 +206,7 @@ class EncoderRNN(TrainableNM): """ Simple RNN-based encoder using GRU cells """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -214,6 +218,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/backends/pytorch/common/search.py b/nemo/backends/pytorch/common/search.py index acaf32213016..b2fc2892e031 100644 --- a/nemo/backends/pytorch/common/search.py +++ b/nemo/backends/pytorch/common/search.py @@ -4,6 +4,7 @@ from nemo.backends.pytorch.nm import NonTrainableNM from nemo.core.neural_types import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs INF = float('inf') BIG_NUM = 1e4 @@ -29,6 +30,7 @@ class GreedySearch(NonTrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -40,6 +42,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
diff --git a/nemo/backends/pytorch/torchvision/data/image_folder.py b/nemo/backends/pytorch/torchvision/data/image_folder.py index 5c4946b5cdd5..3cb3eaa8344d 100644 --- a/nemo/backends/pytorch/torchvision/data/image_folder.py +++ b/nemo/backends/pytorch/torchvision/data/image_folder.py @@ -3,6 +3,7 @@ from .....core import * from ...nm import DataLayerNM +from nemo.utils.decorators import add_port_docs class ImageFolderDataLayer(DataLayerNM): @@ -10,32 +11,22 @@ class ImageFolderDataLayer(DataLayerNM): NeuralModule.""" @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. - - image: - 0: AxisType(BatchTag) - - 1: AxisType(ChannelTag) - - 2: AxisType(HeightTag, input_size) - - 3: AxisType(WidthTag, input_size) - - - label: - 0: AxisType(BatchTag) """ return { - "image": NeuralType( - { - 0: AxisType(BatchTag), - 1: AxisType(ChannelTag), - 2: AxisType(HeightTag, self._input_size), - 3: AxisType(WidthTag, self._input_size), - } - ), - "label": NeuralType({0: AxisType(BatchTag)}), + # "image": NeuralType( + # { + # 0: AxisType(BatchTag), + # 1: AxisType(ChannelTag), + # 2: AxisType(HeightTag, self._input_size), + # 3: AxisType(WidthTag, self._input_size), + # } + # ), + # "label": NeuralType({0: AxisType(BatchTag)}), + "image": NeuralType(elements_type=ChannelType(), axes=('B', 'C', 'H', 'W')), + "label": NeuralType(elements_type=LogitsType(), axes=tuple('B')), } def __init__(self, batch_size, path, input_size=32, shuffle=True, is_eval=False): diff --git a/nemo/backends/pytorch/tutorials/chatbot/modules.py b/nemo/backends/pytorch/tutorials/chatbot/modules.py index 14d704b4d4fc..2459afa158b0 100644 --- a/nemo/backends/pytorch/tutorials/chatbot/modules.py +++ b/nemo/backends/pytorch/tutorials/chatbot/modules.py @@ -12,12 +12,14 @@ from .....core.neural_types import * from ...nm import DataLayerNM, LossNM, TrainableNM from ..chatbot import data +from nemo.utils.decorators import add_port_docs class DialogDataLayer(DataLayerNM): """Class representing data layer for a chatbot.""" @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -71,6 +73,7 @@ class EncoderRNN(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -80,6 +83,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -131,6 +135,7 @@ def forward(self, input_seq, input_lengths, hidden=None): class LuongAttnDecoderRNN(TrainableNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -141,6 +146,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -269,6 +275,7 @@ def forward(self, targets, encoder_outputs, max_target_len): class MaskedXEntropyLoss(LossNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -279,6 +286,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -306,12 +314,14 @@ def _loss_function(self, **kwargs): class GreedyLuongAttnDecoderRNN(TrainableNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. 
""" return {"encoder_outputs": NeuralType(('T', 'B', 'D'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/backends/pytorch/tutorials/toys.py b/nemo/backends/pytorch/tutorials/toys.py index 442c841ee836..25fa8bd7c277 100644 --- a/nemo/backends/pytorch/tutorials/toys.py +++ b/nemo/backends/pytorch/tutorials/toys.py @@ -9,12 +9,14 @@ from nemo.backends.pytorch.nm import DataLayerNM, LossNM, TrainableNM from nemo.core import DeviceType, NeuralModule from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs class TaylorNet(TrainableNM): # Note inheritance from TrainableNM """Module which learns Taylor's coefficients.""" @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -24,6 +26,7 @@ def input_ports(self): return {"x": NeuralType(('B', 'D'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -58,6 +61,7 @@ class TaylorNetO(TrainableNM): # Note inheritance from TrainableNM """Module which learns Taylor's coefficients.""" @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -68,6 +72,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -119,6 +124,7 @@ def __len__(self): return self._n @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports """ @@ -168,6 +174,7 @@ def dataset(self): class MSELoss(LossNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -187,6 +194,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -202,6 +210,7 @@ def _loss_function(self, **kwargs): class L1Loss(LossNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -211,6 +220,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -226,6 +236,7 @@ def _loss_function(self, **kwargs): class CrossEntropyLoss(LossNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -235,6 +246,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/asr/audio_preprocessing.py b/nemo/collections/asr/audio_preprocessing.py index a35d35d7f00e..34914957d8e0 100644 --- a/nemo/collections/asr/audio_preprocessing.py +++ b/nemo/collections/asr/audio_preprocessing.py @@ -37,6 +37,7 @@ from nemo.backends.pytorch import NonTrainableNM from nemo.core import Optimization from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs try: import torchaudio @@ -120,6 +121,7 @@ class AudioToSpectrogramPreprocessor(AudioPreprocessor): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -131,6 +133,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -270,6 +273,7 @@ class AudioToMelSpectrogramPreprocessor(AudioPreprocessor): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. 
""" @@ -281,6 +285,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -416,6 +421,7 @@ class AudioToMFCCPreprocessor(AudioPreprocessor): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -427,6 +433,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -546,6 +553,7 @@ class SpectrogramAugmentation(NonTrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -556,6 +564,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -609,6 +618,7 @@ class MultiplyBatch(NonTrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -624,6 +634,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/asr/beam_search_decoder.py b/nemo/collections/asr/beam_search_decoder.py index 87561a5d1a31..13640b2f476f 100644 --- a/nemo/collections/asr/beam_search_decoder.py +++ b/nemo/collections/asr/beam_search_decoder.py @@ -7,6 +7,7 @@ from nemo.backends.pytorch.nm import NonTrainableNM from nemo.core import DeviceType from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs from nemo.utils.helpers import get_cuda_device @@ -36,6 +37,7 @@ class BeamSearchDecoderWithLM(NonTrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -47,6 +49,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/asr/data_layer.py b/nemo/collections/asr/data_layer.py index 4b44e8366878..3f038cd90d78 100644 --- a/nemo/collections/asr/data_layer.py +++ b/nemo/collections/asr/data_layer.py @@ -26,6 +26,7 @@ from nemo.backends.pytorch import DataLayerNM from nemo.core import DeviceType from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs from nemo.utils.misc import pad_to __all__ = [ @@ -97,6 +98,7 @@ class AudioToTextDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -105,7 +107,12 @@ def output_ports(self): # 'a_sig_length': NeuralType({0: AxisType(BatchTag)}), # 'transcripts': NeuralType({0: AxisType(BatchTag), 1: AxisType(TimeTag)}), # 'transcript_length': NeuralType({0: AxisType(BatchTag)}), - 'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)), + 'audio_signal': NeuralType( + ('B', 'T'), + AudioSignal(freq=self._sample_rate) + if self is not None and self._sample_rate is not None + else AudioSignal(), + ), 'a_sig_length': NeuralType(tuple('B'), LengthsType()), 'transcripts': NeuralType(('B', 'T'), LabelsType()), 'transcript_length': NeuralType(tuple('B'), LengthsType()), @@ -218,6 +225,7 @@ class KaldiFeatureDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -344,6 +352,7 @@ class TranscriptDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
diff --git a/nemo/collections/asr/greedy_ctc_decoder.py b/nemo/collections/asr/greedy_ctc_decoder.py index 2d49011e7235..287db80cd8bf 100644 --- a/nemo/collections/asr/greedy_ctc_decoder.py +++ b/nemo/collections/asr/greedy_ctc_decoder.py @@ -3,6 +3,7 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs class GreedyCTCDecoder(TrainableNM): @@ -11,6 +12,7 @@ class GreedyCTCDecoder(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -18,6 +20,7 @@ def input_ports(self): return {"log_probs": NeuralType(('B', 'T', 'D'), LogprobsType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py index 7923c2078171..77665db0caaa 100644 --- a/nemo/collections/asr/jasper.py +++ b/nemo/collections/asr/jasper.py @@ -8,6 +8,7 @@ from .parts.jasper import JasperBlock, init_weights, jasper_activations from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs class JasperEncoder(TrainableNM): @@ -86,6 +87,7 @@ class JasperEncoder(TrainableNM): length: Optional[torch.Tensor] @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -99,6 +101,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -198,6 +201,7 @@ class JasperDecoderForCTC(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -209,6 +213,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/asr/las/misc.py b/nemo/collections/asr/las/misc.py index 56519e143fd8..b977b81218c3 100644 --- a/nemo/collections/asr/las/misc.py +++ b/nemo/collections/asr/las/misc.py @@ -5,6 +5,7 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.collections.asr.jasper import init_weights as jasper_init_weights from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs class JasperRNNConnector(TrainableNM): @@ -18,6 +19,7 @@ class JasperRNNConnector(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -25,6 +27,7 @@ def input_ports(self): return {'tensor': NeuralType(('B', 'D', 'T'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/asr/losses.py b/nemo/collections/asr/losses.py index dc640f42018f..d8714187cd2e 100644 --- a/nemo/collections/asr/losses.py +++ b/nemo/collections/asr/losses.py @@ -4,6 +4,7 @@ from nemo.backends.pytorch.nm import LossNM from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs class CTCLossNM(LossNM): @@ -15,6 +16,7 @@ class CTCLossNM(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -30,6 +32,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
loss: diff --git a/nemo/collections/nlp/nm/data_layers/glue_benchmark_datalayer.py b/nemo/collections/nlp/nm/data_layers/glue_benchmark_datalayer.py index ac5ae86cca6c..dca9324b7817 100644 --- a/nemo/collections/nlp/nm/data_layers/glue_benchmark_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/glue_benchmark_datalayer.py @@ -17,6 +17,7 @@ from nemo.collections.nlp.data import GLUEDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import CategoricalValuesType, ChannelType, NeuralType, RegressionValuesType +from nemo.utils.decorators import add_port_docs __all__ = ['GlueClassificationDataLayer', 'GlueRegressionDataLayer'] @@ -34,6 +35,7 @@ class GlueClassificationDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -85,6 +87,7 @@ class GlueRegressionDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/data_layers/joint_intent_slot_datalayer.py b/nemo/collections/nlp/nm/data_layers/joint_intent_slot_datalayer.py index c306cfcccc04..df3731cfa454 100644 --- a/nemo/collections/nlp/nm/data_layers/joint_intent_slot_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/joint_intent_slot_datalayer.py @@ -17,6 +17,7 @@ from nemo.collections.nlp.data import BertJointIntentSlotDataset, BertJointIntentSlotInferDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertJointIntentSlotDataLayer', 'BertJointIntentSlotInferDataLayer'] @@ -41,6 +42,7 @@ class BertJointIntentSlotDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -109,6 +111,7 @@ class BertJointIntentSlotInferDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/data_layers/lm_bert_datalayer.py b/nemo/collections/nlp/nm/data_layers/lm_bert_datalayer.py index 98c1ba23c10f..176a7cc67a59 100644 --- a/nemo/collections/nlp/nm/data_layers/lm_bert_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/lm_bert_datalayer.py @@ -26,6 +26,7 @@ from nemo.collections.nlp.data import BertPretrainingDataset, BertPretrainingPreprocessedDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertPretrainingDataLayer', 'BertPretrainingPreprocessedDataLayer'] @@ -46,6 +47,7 @@ class BertPretrainingDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -91,6 +93,7 @@ class BertPretrainingPreprocessedDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" diff --git a/nemo/collections/nlp/nm/data_layers/lm_transformer_datalayer.py b/nemo/collections/nlp/nm/data_layers/lm_transformer_datalayer.py index ebd1b2a738d0..a81cb1568c69 100644 --- a/nemo/collections/nlp/nm/data_layers/lm_transformer_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/lm_transformer_datalayer.py @@ -17,6 +17,7 @@ from nemo.collections.nlp.data import LanguageModelingDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['LanguageModelingDataLayer'] @@ -34,6 +35,7 @@ class LanguageModelingDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/data_layers/machine_translation_datalayer.py b/nemo/collections/nlp/nm/data_layers/machine_translation_datalayer.py index 44f877f5dcc3..33fa833fa7a6 100644 --- a/nemo/collections/nlp/nm/data_layers/machine_translation_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/machine_translation_datalayer.py @@ -21,6 +21,7 @@ from nemo.collections.nlp.data import TranslationDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['TranslationDataLayer'] @@ -44,6 +45,7 @@ class TranslationDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/data_layers/punctuation_capitalization_datalayer.py b/nemo/collections/nlp/nm/data_layers/punctuation_capitalization_datalayer.py index e3cfeda2235a..16de9a8956e7 100644 --- a/nemo/collections/nlp/nm/data_layers/punctuation_capitalization_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/punctuation_capitalization_datalayer.py @@ -17,12 +17,14 @@ from nemo.collections.nlp.data import BertPunctuationCapitalizationDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['PunctuationCapitalizationDataLayer'] class PunctuationCapitalizationDataLayer(TextDataLayer): @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/data_layers/qa_squad_datalayer.py b/nemo/collections/nlp/nm/data_layers/qa_squad_datalayer.py index 096ed3a7764a..bac2b70b92ff 100644 --- a/nemo/collections/nlp/nm/data_layers/qa_squad_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/qa_squad_datalayer.py @@ -17,6 +17,7 @@ from nemo.collections.nlp.data import SquadDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertQuestionAnsweringDataLayer'] @@ -46,6 +47,7 @@ class BertQuestionAnsweringDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" diff --git a/nemo/collections/nlp/nm/data_layers/state_tracking_trade_datalayer.py b/nemo/collections/nlp/nm/data_layers/state_tracking_trade_datalayer.py index 2b7e3800928a..02088916ac83 100644 --- a/nemo/collections/nlp/nm/data_layers/state_tracking_trade_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/state_tracking_trade_datalayer.py @@ -44,12 +44,14 @@ from nemo.collections.nlp.data.datasets import MultiWOZDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core.neural_types import ChannelType, LabelsType, LengthsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['MultiWOZDataLayer'] class MultiWOZDataLayer(TextDataLayer): @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/data_layers/text_classification_datalayer.py b/nemo/collections/nlp/nm/data_layers/text_classification_datalayer.py index a104a5a543f5..2d6e60e0af58 100644 --- a/nemo/collections/nlp/nm/data_layers/text_classification_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/text_classification_datalayer.py @@ -17,6 +17,7 @@ from nemo.collections.nlp.data import BertTextClassificationDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertSentenceClassificationDataLayer'] @@ -34,6 +35,7 @@ class BertSentenceClassificationDataLayer(TextDataLayer): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/data_layers/token_classification_datalayer.py b/nemo/collections/nlp/nm/data_layers/token_classification_datalayer.py index 5fd6cbe2ee5b..8110fcf16e1b 100644 --- a/nemo/collections/nlp/nm/data_layers/token_classification_datalayer.py +++ b/nemo/collections/nlp/nm/data_layers/token_classification_datalayer.py @@ -17,12 +17,14 @@ from nemo.collections.nlp.data import BertTokenClassificationDataset, BertTokenClassificationInferDataset from nemo.collections.nlp.nm.data_layers.text_datalayer import TextDataLayer from nemo.core import ChannelType, LabelsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertTokenClassificationDataLayer', 'BertTokenClassificationInferDataLayer'] class BertTokenClassificationDataLayer(TextDataLayer): @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -75,6 +77,7 @@ def __init__( class BertTokenClassificationInferDataLayer(TextDataLayer): @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/losses/aggregator_loss.py b/nemo/collections/nlp/nm/losses/aggregator_loss.py index b1681c7048cb..3165e19af29b 100644 --- a/nemo/collections/nlp/nm/losses/aggregator_loss.py +++ b/nemo/collections/nlp/nm/losses/aggregator_loss.py @@ -16,6 +16,7 @@ from nemo.backends.pytorch import LossNM from nemo.core import LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['LossAggregatorNM'] @@ -29,6 +30,7 @@ class LossAggregatorNM(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -40,6 +42,7 @@ def input_ports(self): return input_ports @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
diff --git a/nemo/collections/nlp/nm/losses/joint_intent_slot_loss.py b/nemo/collections/nlp/nm/losses/joint_intent_slot_loss.py index ce73176747d7..be5b87936c75 100644 --- a/nemo/collections/nlp/nm/losses/joint_intent_slot_loss.py +++ b/nemo/collections/nlp/nm/losses/joint_intent_slot_loss.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import LossNM from nemo.core import ChannelType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['JointIntentSlotLoss'] @@ -46,6 +47,7 @@ class JointIntentSlotLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -64,6 +66,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/losses/masked_language_modeling_loss.py b/nemo/collections/nlp/nm/losses/masked_language_modeling_loss.py index 38f5169bf348..b29667b1aee0 100644 --- a/nemo/collections/nlp/nm/losses/masked_language_modeling_loss.py +++ b/nemo/collections/nlp/nm/losses/masked_language_modeling_loss.py @@ -17,6 +17,7 @@ from nemo.backends.pytorch import LossNM from nemo.collections.nlp.nm.losses.smoothed_cross_entropy_loss import SmoothedCrossEntropyLoss from nemo.core import ChannelType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['MaskedLanguageModelingLossNM'] @@ -30,6 +31,7 @@ class MaskedLanguageModelingLossNM(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -43,6 +45,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/losses/padded_smoothed_cross_entropy_loss.py b/nemo/collections/nlp/nm/losses/padded_smoothed_cross_entropy_loss.py index 1564f43c40b0..dfae9e852987 100644 --- a/nemo/collections/nlp/nm/losses/padded_smoothed_cross_entropy_loss.py +++ b/nemo/collections/nlp/nm/losses/padded_smoothed_cross_entropy_loss.py @@ -18,6 +18,7 @@ from nemo.collections.nlp.nm.losses.smoothed_cross_entropy_loss import SmoothedCrossEntropyLoss from nemo.collections.nlp.utils.common_nlp_utils import mask_padded_tokens from nemo.core import LabelsType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['PaddedSmoothedCrossEntropyLossNM'] @@ -36,6 +37,7 @@ class PaddedSmoothedCrossEntropyLossNM(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -47,6 +49,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/losses/qa_squad_loss.py b/nemo/collections/nlp/nm/losses/qa_squad_loss.py index 1237b9255edb..289f98ce989e 100644 --- a/nemo/collections/nlp/nm/losses/qa_squad_loss.py +++ b/nemo/collections/nlp/nm/losses/qa_squad_loss.py @@ -18,6 +18,7 @@ from nemo.backends.pytorch import LossNM from nemo.core import ChannelType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['QuestionAnsweringLoss'] @@ -36,6 +37,7 @@ class QuestionAnsweringLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -49,6 +51,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
diff --git a/nemo/collections/nlp/nm/losses/state_tracking_trade_loss.py b/nemo/collections/nlp/nm/losses/state_tracking_trade_loss.py index aa67439b9262..7623c8cddc32 100644 --- a/nemo/collections/nlp/nm/losses/state_tracking_trade_loss.py +++ b/nemo/collections/nlp/nm/losses/state_tracking_trade_loss.py @@ -39,7 +39,8 @@ import torch from nemo.backends.pytorch.nm import LossNM -from nemo.core.neural_types import ChannelType, LabelsType, LengthsType, LogitsType, LossType, NeuralType +from nemo.core.neural_types import LabelsType, LengthsType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['TRADEMaskedCrossEntropy', 'CrossEntropyLoss3D'] @@ -57,6 +58,7 @@ class TRADEMaskedCrossEntropy(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -79,6 +81,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -120,6 +123,7 @@ class CrossEntropyLoss3D(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -131,6 +135,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/losses/token_classification_loss.py b/nemo/collections/nlp/nm/losses/token_classification_loss.py index e27c74e952a3..ec7dad68c499 100644 --- a/nemo/collections/nlp/nm/losses/token_classification_loss.py +++ b/nemo/collections/nlp/nm/losses/token_classification_loss.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import LossNM from nemo.core import ChannelType, LabelsType, LogitsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['TokenClassificationLoss'] @@ -36,6 +37,7 @@ class TokenClassificationLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -49,6 +51,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/trainables/common/huggingface/albert_nm.py b/nemo/collections/nlp/nm/trainables/common/huggingface/albert_nm.py index 9df214302072..5279d60efb47 100644 --- a/nemo/collections/nlp/nm/trainables/common/huggingface/albert_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/huggingface/albert_nm.py @@ -26,6 +26,7 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_modules import PretrainedModelInfo from nemo.core.neural_types import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['Albert'] @@ -52,6 +53,7 @@ class Albert(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -67,6 +69,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" diff --git a/nemo/collections/nlp/nm/trainables/common/huggingface/bert_nm.py b/nemo/collections/nlp/nm/trainables/common/huggingface/bert_nm.py index 28cc34a4cf0d..a4ac1f9d1c66 100644 --- a/nemo/collections/nlp/nm/trainables/common/huggingface/bert_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/huggingface/bert_nm.py @@ -21,6 +21,7 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_modules import PretrainedModelInfo from nemo.core.neural_types import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BERT'] @@ -47,6 +48,7 @@ class BERT(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -60,6 +62,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/trainables/common/huggingface/roberta_nm.py b/nemo/collections/nlp/nm/trainables/common/huggingface/roberta_nm.py index 2f0396172d3b..650d637bb74e 100644 --- a/nemo/collections/nlp/nm/trainables/common/huggingface/roberta_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/huggingface/roberta_nm.py @@ -26,6 +26,7 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_modules import PretrainedModelInfo from nemo.core.neural_types import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['Roberta'] @@ -52,6 +53,7 @@ class Roberta(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -67,6 +69,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/trainables/common/sequence_classification_nm.py b/nemo/collections/nlp/nm/trainables/common/sequence_classification_nm.py index 60b1f2c45e7c..5f938b64d4c2 100644 --- a/nemo/collections/nlp/nm/trainables/common/sequence_classification_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/sequence_classification_nm.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import MultiLayerPerceptron, TrainableNM from nemo.collections.nlp.nm.trainables.common.transformer.transformer_utils import transformer_weights_init from nemo.core import ChannelType, LogitsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['SequenceClassifier'] @@ -39,12 +40,14 @@ class SequenceClassifier(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ return {"hidden_states": NeuralType(('B', 'T', 'D'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" diff --git a/nemo/collections/nlp/nm/trainables/common/sequence_regression_nm.py b/nemo/collections/nlp/nm/trainables/common/sequence_regression_nm.py index 0989afd162ad..8f0db64dd48a 100644 --- a/nemo/collections/nlp/nm/trainables/common/sequence_regression_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/sequence_regression_nm.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import MultiLayerPerceptron, TrainableNM from nemo.collections.nlp.nm.trainables.common.transformer.transformer_utils import transformer_weights_init from nemo.core import ChannelType, NeuralType, RegressionValuesType +from nemo.utils.decorators import add_port_docs __all__ = ['SequenceRegression'] @@ -37,6 +38,7 @@ class SequenceRegression(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -44,6 +46,7 @@ def input_ports(self): return {"hidden_states": NeuralType(('B', 'T', 'D'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/trainables/common/token_classification_nm.py b/nemo/collections/nlp/nm/trainables/common/token_classification_nm.py index 1b4c879906c7..2eefe80ec3c6 100644 --- a/nemo/collections/nlp/nm/trainables/common/token_classification_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/token_classification_nm.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import MultiLayerPerceptron, TrainableNM from nemo.collections.nlp.nm.trainables.common.transformer.transformer_utils import gelu, transformer_weights_init from nemo.core import ChannelType, LogitsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['BertTokenClassifier', 'TokenClassifier'] @@ -40,6 +41,7 @@ class BertTokenClassifier(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -47,6 +49,7 @@ def input_ports(self): return {"hidden_states": NeuralType(('B', 'T', 'D'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -101,6 +104,7 @@ class TokenClassifier(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -108,6 +112,7 @@ def input_ports(self): return {"hidden_states": NeuralType(('B', 'T', 'C'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/trainables/common/transformer/transformer_nm.py b/nemo/collections/nlp/nm/trainables/common/transformer/transformer_nm.py index db858982adb1..a57d20941f96 100644 --- a/nemo/collections/nlp/nm/trainables/common/transformer/transformer_nm.py +++ b/nemo/collections/nlp/nm/trainables/common/transformer/transformer_nm.py @@ -15,6 +15,7 @@ from nemo.collections.nlp.nm.trainables.common.transformer.transformer_modules import TransformerEmbedding from nemo.collections.nlp.nm.trainables.common.transformer.transformer_utils import transformer_weights_init from nemo.core.neural_types import ChannelType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['TransformerEncoderNM', 'TransformerDecoderNM', 'GreedyLanguageGeneratorNM', 'BeamSearchTranslatorNM'] @@ -45,6 +46,7 @@ class TransformerEncoderNM(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. 
""" @@ -56,6 +58,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -134,6 +137,7 @@ class TransformerDecoderNM(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -149,6 +153,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -216,6 +221,7 @@ class GreedyLanguageGeneratorNM(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -223,6 +229,7 @@ def input_ports(self): return {"input_ids": NeuralType(('B', 'T'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -272,6 +279,7 @@ class BeamSearchTranslatorNM(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -283,6 +291,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/nlp/nm/trainables/dialogue_state_tracking/state_tracking_trade_nm.py b/nemo/collections/nlp/nm/trainables/dialogue_state_tracking/state_tracking_trade_nm.py index 1e047542e3ba..a576e4be34be 100644 --- a/nemo/collections/nlp/nm/trainables/dialogue_state_tracking/state_tracking_trade_nm.py +++ b/nemo/collections/nlp/nm/trainables/dialogue_state_tracking/state_tracking_trade_nm.py @@ -46,12 +46,14 @@ from nemo.backends.pytorch.nm import TrainableNM from nemo.core.neural_types import ChannelType, LabelsType, LengthsType, LogitsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['TRADEGenerator'] class TRADEGenerator(TrainableNM): @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -81,6 +83,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. diff --git a/nemo/collections/nlp/nm/trainables/joint_intent_slot/joint_intent_slot_nm.py b/nemo/collections/nlp/nm/trainables/joint_intent_slot/joint_intent_slot_nm.py index c906417afd6d..4020e6e290b9 100644 --- a/nemo/collections/nlp/nm/trainables/joint_intent_slot/joint_intent_slot_nm.py +++ b/nemo/collections/nlp/nm/trainables/joint_intent_slot/joint_intent_slot_nm.py @@ -19,6 +19,7 @@ from nemo.backends.pytorch import MultiLayerPerceptron, TrainableNM from nemo.collections.nlp.nm.trainables.common.transformer.transformer_utils import transformer_weights_init from nemo.core import ChannelType, LogitsType, NeuralType +from nemo.utils.decorators import add_port_docs __all__ = ['JointIntentSlotClassifier'] @@ -37,6 +38,7 @@ class JointIntentSlotClassifier(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -44,6 +46,7 @@ def input_ports(self): return {"hidden_states": NeuralType(('B', 'T', 'C'), ChannelType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
diff --git a/nemo/collections/simple_gan/gan.py b/nemo/collections/simple_gan/gan.py index b0d39a406d64..d441a45e53ae 100644 --- a/nemo/collections/simple_gan/gan.py +++ b/nemo/collections/simple_gan/gan.py @@ -7,6 +7,7 @@ from nemo.backends.pytorch.nm import DataLayerNM, LossNM, TrainableNM from nemo.core import DeviceType from nemo.core.neural_types import ChannelType, LabelsType, LossType, NeuralType +from nemo.utils.decorators import add_port_docs class SimpleDiscriminator(TrainableNM): @@ -15,6 +16,7 @@ class SimpleDiscriminator(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -31,6 +33,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -65,6 +68,7 @@ class SimpleGenerator(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -81,6 +85,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -128,6 +133,7 @@ class DiscriminatorLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. @@ -142,6 +148,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -168,6 +175,7 @@ class GradientPenalty(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -186,6 +194,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -225,6 +234,7 @@ class InterpolateImage(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -250,6 +260,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -286,6 +297,7 @@ class RandomDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. @@ -351,6 +363,7 @@ class MnistGanDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/tts/data_layers.py b/nemo/collections/tts/data_layers.py index ed5c18b29d24..fb10e98e2769 100644 --- a/nemo/collections/tts/data_layers.py +++ b/nemo/collections/tts/data_layers.py @@ -6,6 +6,7 @@ from nemo.backends.pytorch.nm import DataLayerNM from nemo.core import DeviceType from nemo.core.neural_types import AudioSignal, LengthsType, NeuralType +from nemo.utils.decorators import add_port_docs logging = nemo.logging @@ -48,6 +49,7 @@ class AudioDataLayer(DataLayerNM): """ @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" diff --git a/nemo/collections/tts/tacotron2_modules.py b/nemo/collections/tts/tacotron2_modules.py index 01bada4df8b1..812a96d2a72f 100644 --- a/nemo/collections/tts/tacotron2_modules.py +++ b/nemo/collections/tts/tacotron2_modules.py @@ -9,6 +9,7 @@ from .parts.tacotron2 import Decoder, Encoder, Postnet from nemo.backends.pytorch.nm import LossNM, NonTrainableNM, TrainableNM from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs __all__ = [ "MakeGate", @@ -33,6 +34,7 @@ class TextEmbedding(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -40,6 +42,7 @@ def input_ports(self): return {"char_phone": NeuralType(('B', 'T'), LabelsType())} @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -75,6 +78,7 @@ class Tacotron2Encoder(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -88,6 +92,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -153,6 +158,7 @@ class Tacotron2Decoder(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -170,6 +176,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -270,6 +277,7 @@ class Tacotron2DecoderInfer(Tacotron2Decoder): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -283,6 +291,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -329,6 +338,7 @@ class Tacotron2Postnet(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -340,6 +350,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -388,6 +399,7 @@ class Tacotron2Loss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -415,6 +427,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -468,6 +481,7 @@ class MakeGate(NonTrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -481,6 +495,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/collections/tts/waveglow_modules.py b/nemo/collections/tts/waveglow_modules.py index 2a2c03ae9eaa..d2ee90711b97 100644 --- a/nemo/collections/tts/waveglow_modules.py +++ b/nemo/collections/tts/waveglow_modules.py @@ -7,6 +7,7 @@ from nemo.backends.pytorch.nm import LossNM, TrainableNM from nemo.collections.tts.parts.waveglow import WaveGlow from nemo.core.neural_types import * +from nemo.utils.decorators import add_port_docs __all__ = ["WaveGlowNM", "WaveGlowInferNM", "WaveGlowLoss"] @@ -39,6 +40,7 @@ class WaveGlowNM(TrainableNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -52,6 +54,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. 
""" @@ -136,6 +139,7 @@ class WaveGlowInferNM(WaveGlowNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -147,6 +151,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ @@ -225,6 +230,7 @@ class WaveGlowLoss(LossNM): """ @property + @add_port_docs() def input_ports(self): """Returns definitions of module input ports. """ @@ -239,6 +245,7 @@ def input_ports(self): } @property + @add_port_docs() def output_ports(self): """Returns definitions of module output ports. """ diff --git a/nemo/core/neural_types/axes.py b/nemo/core/neural_types/axes.py index 1b3159815a90..073b215e1a4d 100644 --- a/nemo/core/neural_types/axes.py +++ b/nemo/core/neural_types/axes.py @@ -45,6 +45,9 @@ class AxisKind(AxisKindAbstract): Height = 4 Any = 5 + def __repr__(self): + return self.__str__() + def __str__(self): return str(self.name).lower() @@ -83,3 +86,12 @@ def __init__(self, kind: AxisKindAbstract, size: Optional[int] = None, is_list=F self.kind = kind self.size = size self.is_list = is_list + + def __repr__(self): + if self.size is None: + representation = str(self.kind) + else: + representation = f"{str(self.kind)}:{self.size}" + if self.is_list: + representation += "_listdim" + return representation diff --git a/nemo/core/neural_types/elements.py b/nemo/core/neural_types/elements.py index 5d410b90ebde..d963831e2cbc 100644 --- a/nemo/core/neural_types/elements.py +++ b/nemo/core/neural_types/elements.py @@ -49,6 +49,9 @@ class ElementType(ABC): def __str__(self): self.__doc__ + def __repr__(self): + return self.__class__.__name__ + @property def type_parameters(self) -> Dict: """Override this property to parametrize your type. For example, you can specify 'storage' type such as diff --git a/nemo/core/neural_types/neural_type.py b/nemo/core/neural_types/neural_type.py index b36d0c3eba5f..ad38bc290859 100644 --- a/nemo/core/neural_types/neural_type.py +++ b/nemo/core/neural_types/neural_type.py @@ -46,10 +46,11 @@ class NeuralType(object): """ def __str__(self): - return ( - f"axes: {[(c.kind, c.size, c.is_list) for c in self.axes]}\n" - f"elements_type: {self.elements_type.__class__.__name__}" - ) + + if self.axes is not None: + return f"axes: {self.axes}; " f" elements_type: {self.elements_type.__class__.__name__}" + else: + return f"axes: None; " f" elements_type: {self.elements_type.__class__.__name__}" def __init__(self, axes: Optional[Tuple] = None, elements_type: ElementType = VoidType(), optional=False): if not isinstance(elements_type, ElementType): diff --git a/nemo/utils/decorators/__init__.py b/nemo/utils/decorators/__init__.py index a10308813138..d94b5c94f9f7 100644 --- a/nemo/utils/decorators/__init__.py +++ b/nemo/utils/decorators/__init__.py @@ -13,3 +13,4 @@ # limitations under the License. from .deprecated import deprecated +from .port_docs import add_port_docs diff --git a/nemo/utils/decorators/port_docs.py b/nemo/utils/decorators/port_docs.py new file mode 100644 index 000000000000..731ce27f619c --- /dev/null +++ b/nemo/utils/decorators/port_docs.py @@ -0,0 +1,89 @@ +# Copyright (C) NVIDIA. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The "add_port_docs" decorator is needed to nicely generate neural types in Sphinx for input and output ports
+
+__all__ = [
+    'add_port_docs',
+]
+
+import functools
+import sys
+
+import wrapt
+
+
+def _normalize_docstring(docstring):
+    """Normalizes the docstring.
+    Replaces tabs with spaces, removes leading and trailing blank lines, and
+    removes any indentation.
+    Copied from PEP-257:
+    https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation
+    Args:
+        docstring: the docstring to normalize
+    Returns:
+        The normalized docstring
+    """
+    if not docstring:
+        return ''
+    # Convert tabs to spaces (following the normal Python rules)
+    # and split into a list of lines:
+    lines = docstring.expandtabs().splitlines()
+    # Determine minimum indentation (first line doesn't count):
+    # (we use sys.maxsize because sys.maxint doesn't exist in Python 3)
+    indent = sys.maxsize
+    for line in lines[1:]:
+        stripped = line.lstrip()
+        if stripped:
+            indent = min(indent, len(line) - len(stripped))
+    # Remove indentation (first line is special):
+    trimmed = [lines[0].strip()]
+    if indent < sys.maxsize:
+        for line in lines[1:]:
+            trimmed.append(line[indent:].rstrip())
+    # Strip off trailing and leading blank lines:
+    while trimmed and not trimmed[-1]:
+        trimmed.pop()
+    while trimmed and not trimmed[0]:
+        trimmed.pop(0)
+    # Return a single string:
+    return '\n'.join(trimmed)
+
+
+def add_port_docs(wrapped=None, instance=None, value=''):
+    if wrapped is None:
+        return functools.partial(add_port_docs, value=value)
+
+    @wrapt.decorator
+    def wrapper(wrapped, instance=None, args=None, kwargs=None):
+        return wrapped(*args, **kwargs)
+
+    decorated = wrapper(wrapped)
+    try:
+        port_2_ntype = decorated(instance)
+    except Exception:
+        port_2_ntype = None
+
+    port_description = ""
+    if port_2_ntype is not None:
+        for port, ntype in port_2_ntype.items():
+            port_description += "* *" + port + "* : " + str(ntype)
+            port_description += "\n\n"
+
+    __doc__ = _normalize_docstring(wrapped.__doc__) + '\n\n' + str(port_description)
+    __doc__ = _normalize_docstring(__doc__)
+
+    wrapt.FunctionWrapper.__setattr__(decorated, "__doc__", __doc__)
+
+    return decorated
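+
+# Hypothetical usage sketch (illustrative comment only; "MyModule" and its
+# port definition below are assumptions, not part of NeMo):
+#
+#     class MyModule(TrainableNM):
+#         @property
+#         @add_port_docs()
+#         def input_ports(self):
+#             """Returns definitions of module input ports.
+#             """
+#             return {"x": NeuralType(('B', 'T'), ChannelType())}
+#
+# The decorator calls the property once (with a None instance); if that succeeds,
+# it appends one "* *port_name* : <NeuralType>" bullet per port to the property's
+# docstring, which Sphinx then renders.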
diff --git a/tests/core/test_infer.py b/tests/core/test_infer.py
index d9b11a3997da..cca655bc5418 100644
--- a/tests/core/test_infer.py
+++ b/tests/core/test_infer.py
@@ -21,6 +21,7 @@
 import nemo
 from nemo.backends.pytorch.nm import NonTrainableNM
 from nemo.core.neural_types import *
+from nemo.utils.decorators import add_port_docs
 from tests.common_setup import NeMoUnitTest
 
 
@@ -29,11 +30,13 @@ def __init__(self):
         super().__init__()
 
     @property
+    @add_port_docs()
     def input_ports(self):
         # return {"mod_in": NeuralType({0: AxisType(BatchTag), 1: AxisType(BaseTag, dim=1)})}
         return {"mod_in": NeuralType((AxisType(AxisKind.Batch), AxisType(AxisKind.Dimension, 1)), ChannelType())}
 
     @property
+    @add_port_docs()
     def output_ports(self):
         # return {"mod_out": NeuralType({0: AxisType(BatchTag), 1: AxisType(BaseTag, dim=1)})}
         return {"mod_out": NeuralType((AxisType(AxisKind.Batch), AxisType(AxisKind.Dimension, 1)), ChannelType())}
@@ -47,10 +50,12 @@ def __init__(self):
         super().__init__()
 
     @property
+    @add_port_docs()
     def input_ports(self):
         return {"mod_in": NeuralType((AxisType(AxisKind.Batch), AxisType(AxisKind.Dimension, 1)), ChannelType())}
 
     @property
+    @add_port_docs()
     def output_ports(self):
         return {"mod_out": NeuralType((AxisType(AxisKind.Batch), AxisType(AxisKind.Dimension, 1)), ChannelType())}
diff --git a/tests/core/test_neural_types.py b/tests/core/test_neural_types.py
index 133e747db3fe..ade6e74ddc02 100644
--- a/tests/core/test_neural_types.py
+++ b/tests/core/test_neural_types.py
@@ -20,8 +20,10 @@
     AcousticEncodedRepresentation,
     AudioSignal,
     AxisKind,
+    AxisKindAbstract,
     AxisType,
     ChannelType,
+    ElementType,
     MelSpectrogramType,
     MFCCSpectrogramType,
     NeuralPortNmTensorMismatchError,
@@ -186,3 +188,42 @@ def test_any_axis(self):
         self.assertEqual(t1.compare(t2), NeuralTypeComparisonResult.SAME)
         self.assertEqual(t2.compare(t1), NeuralTypeComparisonResult.INCOMPATIBLE)
         self.assertEqual(t1.compare(t0), NeuralTypeComparisonResult.INCOMPATIBLE)
+
+    def test_struct(self):
+        class BoundingBox(ElementType):
+            def __str__(self):
+                return "bounding box from detection model"
+
+            def fields(self):
+                return ("X", "Y", "W", "H")
+
+        # ALSO ADD new, user-defined, axis kind
+        class AxisKind2(AxisKindAbstract):
+            Image = 0
+
+        T1 = NeuralType(
+            elements_type=BoundingBox(),
+            axes=(
+                AxisType(kind=AxisKind.Batch, size=None, is_list=True),
+                AxisType(kind=AxisKind2.Image, size=None, is_list=True),
+            ),
+        )
+
+        class BadBoundingBox(ElementType):
+            def __str__(self):
+                return "bad bounding box from detection model"
+
+            def fields(self):
+                return ("X", "Y", "H")
+
+        T2 = NeuralType(
+            elements_type=BadBoundingBox(),
+            axes=(
+                AxisType(kind=AxisKind.Batch, size=None, is_list=True),
+                AxisType(kind=AxisKind2.Image, size=None, is_list=True),
+            ),
+        )
+        self.assertEqual(T2.compare(T1), NeuralTypeComparisonResult.INCOMPATIBLE)
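+
+        # BadBoundingBox is a distinct ElementType whose fields ("X", "Y", "H")
+        # differ from BoundingBox's ("X", "Y", "W", "H"), hence INCOMPATIBLE above.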