
Releases: jrzaurin/pytorch-widedeep

v0.4.7: individual components can run independently and image treatment replicates that of PyTorch

04 Dec 16:50
2fe4b49

The treatment of the image datasets in WideDeepDataset replicates that of PyTorch. In particular, this source code:

if isinstance(pic, np.ndarray):
    # handle numpy array
    if pic.ndim == 2:
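        # a 2D (grayscale) array gets an extra channel dimension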
        pic = pic[:, :, None]

In addition, I have added the possibility of using each of the model components in isolation. That is, one can now use the wide, deepdense (either DeepDense or DeepDenseResnet), deeptext and deepimage components independently, as in the sketch below.
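For instance, a minimal sketch of running a single component on its own (the constructor argument names below, wide_dim and pred_dim, are assumptions and may differ across versions):

import torch
from pytorch_widedeep.models import Wide

# assumed constructor arguments; check the signature in your installed version
wide = Wide(wide_dim=10, pred_dim=1)

# label-encoded wide features for a batch of 8 observations with 4 columns
X_wide = torch.randint(0, 10, (8, 4))
out = wide(X_wide)  # forward pass through the isolated component
print(out.shape)    # expected: torch.Size([8, 1])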

v0.4.6: Added `DeepDenseResnet` and increased code coverage

20 Sep 10:07

As suggested in issue #26, I have added the possibility of the deepdense component (which receives the embeddings from the categorical columns along with the continuous columns) being a series of dense ResNet blocks. This is available via the class DeepDenseResnet, which is used exactly as before:

deepdense = DeepDenseResnet(...)

model = WideDeep(wide=wide, deepdense=deepdense)
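
For intuition, here is a self-contained sketch of the dense ResNet block idea (linear layers with batch norm plus a skip connection); it illustrates the concept rather than the library's exact implementation:

import torch
import torch.nn as nn

# conceptual dense ResNet block: the input (concatenated categorical embeddings
# and continuous columns) goes through Linear -> BatchNorm -> LeakyReLU twice
# and is added back to a (possibly resized) copy of itself
class DenseResnetBlock(nn.Module):
    def __init__(self, inp: int, out: int):
        super().__init__()
        self.resize = nn.Linear(inp, out) if inp != out else nn.Identity()
        self.lin1 = nn.Linear(inp, out)
        self.bn1 = nn.BatchNorm1d(out)
        self.lin2 = nn.Linear(out, out)
        self.bn2 = nn.BatchNorm1d(out)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        identity = self.resize(x)
        out = self.act(self.bn1(self.lin1(x)))
        out = self.bn2(self.lin2(out))
        return self.act(out + identity)

block = DenseResnetBlock(32, 16)        # e.g. 32 input features -> 16-dim output
print(block(torch.randn(8, 32)).shape)  # torch.Size([8, 16])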

In addition, code coverage has increased to 91%.

v0.4.5: Faster, memory efficient Wide component

09 Aug 10:15
627caf4

Version 0.4.5 includes a new implementation of the Wide linear component via an Embedding layer. Previous versions implemented this component using a Linear layer that received one hot encoded features. For large datasets this was slow and not memory efficient (see #18). We have therefore replaced that implementation with an Embedding layer that receives label encoded features. Note that although the two implementations are equivalent, the latter is faster and significantly more memory efficient.
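As a self-contained illustration of the equivalence (not the library's code): a Linear layer applied to one hot encoded features produces the same output as an Embedding lookup over the corresponding label encoded features, but the lookup avoids materialising the one hot matrix:

import torch
import torch.nn as nn
import torch.nn.functional as F

n_categories, out_dim = 5, 1
linear = nn.Linear(n_categories, out_dim, bias=False)
embedding = nn.Embedding(n_categories, out_dim)
embedding.weight.data = linear.weight.data.t().clone()  # same weights, transposed

labels = torch.tensor([0, 3, 4])                   # label encoded features
one_hot = F.one_hot(labels, n_categories).float()  # one hot encoded features

out_linear = linear(one_hot)       # dense matmul over a mostly-zero matrix
out_embedding = embedding(labels)  # direct row lookup: faster, far less memory
assert torch.allclose(out_linear, out_embedding)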

Also worth mentioning: in the case of regression, the printed loss is now MSE rather than RMSE. This is done for consistency with the metrics saved in the History callback.

NOTE: this changes nothing in terms of how one uses the package; pytorch-widedeep can be used in exactly the same way as in previous versions. However, since the model components have changed, models generated with previous versions are not compatible with this version.

v0.4.2: Added more metrics

21 Jul 16:09
65465a4

Added Precision, Recall, FBetaScore and Fscore.

The metrics now available are Accuracy, Precision, Recall, FBetaScore and Fscore.
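
As a quick reminder of what these metrics compute (a self-contained sketch for the binary case, not the library's implementation):

import torch

y_true = torch.tensor([1, 0, 1, 1, 0, 1])
y_pred = torch.tensor([1, 0, 0, 1, 1, 1])

tp = ((y_pred == 1) & (y_true == 1)).sum().float()
fp = ((y_pred == 1) & (y_true == 0)).sum().float()
fn = ((y_pred == 0) & (y_true == 1)).sum().float()

precision = tp / (tp + fp)   # 0.75
recall = tp / (tp + fn)      # 0.75
beta = 2.0                   # F-beta weights recall beta times as much as precision
fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
fscore = 2 * precision * recall / (precision + recall)  # F-score is the beta = 1 case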

v0.4.1: Added Docs

13 Jul 14:10

Added documentation, improved code quality and fixed a bug related to the Focal Loss.

v0.3.7

22 Feb 15:05
setup.py formatted with black.