
Releases: OpenNMT/OpenNMT-py

OpenNMT-py v0.7.1

24 Jan 21:23
5d6f23b

Many fixes and code refactoring, thanks to @bpopeters, @flauted, and @guillaumekln.

New features
Random sampling at decoding time, thanks to @daphnei (see the sketch below)
Sharding enabled for huge files at translation time
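
A hedged usage sketch for both features; the exact flag names below (-random_sampling_topk, -random_sampling_temp, and -shard_size on translate.py) are assumptions on my part, so check translate.py -h, and the file paths are placeholders:

    # Random sampling instead of beam search (assumed flags; beam size set to 1):
    python translate.py -model demo-model.pt -src test.txt -output pred.txt \
        -beam_size 1 -random_sampling_topk 10 -random_sampling_temp 0.7
    # Translate a huge source file in shards (assumed flag):
    python translate.py -model demo-model.pt -src huge.txt -output pred.txt \
        -shard_size 1000000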

OpenNMT-py v0.7.0

02 Jan 17:48
f240346

Many fixes and code refactoring, thanks to @benopeters.
Migrated to PyTorch 1.0.

OpenNMT-py v0.6.0

28 Nov 10:55
6a8a57f

Mostly fixes and code improvements.

New: YAML (.yml) config files; see the config folder.
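
A minimal sketch of what such a config might look like; the keys simply mirror the existing command-line options, and the values here are hypothetical:

    # myconfig.yml -- sketch; keys mirror the command-line flags, values are placeholders
    data: data/demo
    save_model: demo-model
    batch_size: 4096
    train_steps: 100000

It would then be passed to training with something like python train.py -config myconfig.yml.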

OpenNMT-py v0.5.0

24 Oct 19:17
32af678

Ability to reset the optimizer when using -train_from.

-reset_optim accepts one of ['none', 'all', 'states', 'keep_states'] (example below):
none: default behavior, as before
all: reset the optimizer entirely; the step count starts at zero again
states: reset only the optimizer states, keep all other optimizer parameters from the checkpoint
keep_states: keep the optimizer states from the checkpoint, but allow other parameters to change (learning_rate, for instance)
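
For instance, to continue from a checkpoint with a new learning rate while keeping the optimizer states (the checkpoint name and values below are placeholders):

    # Sketch; checkpoint path and learning rate are made up.
    python train.py -data data/demo -save_model demo-model \
        -train_from demo-model_step_10000.pt \
        -reset_optim keep_states -learning_rate 0.5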

Bug fixes.
Tested with PyTorch 1.0RC; works fine.

OpenNMT-py v0.4.1

11 Oct 08:57
70a99a9
  • Fix the preprocess filename issue introduced by the new sharding.

OpenNMT-py v0.4

08 Oct 18:23
6de42cd

Fixed Speech2Text training (thanks Yuntian)

Removed -max_shard_size; replaced by -shard_size = the number of examples in a shard.

The default value is 1M examples, which works well for most text datasets and avoids RAM OOM in most cases.
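
For example, a preprocessing run making the 1M-example default explicit (file paths are placeholders):

    python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt \
        -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt \
        -save_data data/demo -shard_size 1000000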

OpenNMT-py v0.3

27 Sep 16:18
beaf22b

Now requires PyTorch 0.4.1

Multi-node Multi-GPU with Torch Distributed

New options are:
-master_ip: IP address of the master node
-master_port: port number of the master node
-world_size: total number of processes to run (total GPUs across all nodes)
-gpu_ranks: list of the indices of the processes across all nodes

-gpuid is deprecated

See examples in https://github.com/OpenNMT/OpenNMT-py/blob/master/docs/source/FAQ.md
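
As a rough sketch (the FAQ linked above has the authoritative examples), training on 2 nodes with 2 GPUs each might look like this; the IP, port, and data paths are hypothetical:

    # On node 0 (runs ranks 0 and 1):
    python train.py -data data/demo -save_model demo-model \
        -master_ip 10.0.0.1 -master_port 10000 -world_size 4 -gpu_ranks 0 1
    # On node 1 (runs ranks 2 and 3):
    python train.py -data data/demo -save_model demo-model \
        -master_ip 10.0.0.1 -master_port 10000 -world_size 4 -gpu_ranks 2 3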

Fixes to img2text; now working.

New sharding based on the number of examples.

Fixes to avoid functions deprecated in PyTorch 0.4.1.

OpenNMT-py v0.2.1

31 Aug 14:33
6db7ec1

Fixes and improvements

  • First compatibility steps with PyTorch 0.4.1 (non-breaking)
  • Fix TranslationServer (when several requests try to load the same model at the same time)
  • Fix StopIteration error (Python 3.7)

New features

  • Ensemble decoding at inference (thanks @Waino); see the FAQ and the sketch below
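
A minimal sketch: passing several checkpoints to -model combines their predictions at decoding time (the checkpoint and file names are placeholders):

    python translate.py -model model1.pt model2.pt -src test.txt -output pred.txt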

Last PyTorch 0.4.0 version

28 Aug 19:05
e723f2a

New in this release:

Multi-GPU training based on torch distributed (acknowledgement to Fairseq)
Changed from epoch-based to step-based training (see opts.py)
Average Attention Network (AAN) for the Transformer (thanks @francoishernandez)
New fast beam search (see -fast in translate.py, and the sketch below) (thanks @guillaumekln)
Sparse attention / sparsemax (thanks to @bpopeters)

and many fixes.
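
For the fast beam search mentioned above, a hedged usage sketch (file names are placeholders):

    # Add -fast to an ordinary translate.py call.
    python translate.py -model demo-model.pt -src test.txt -output pred.txt \
        -beam_size 5 -fast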

This is the last version supporting PyTorch 0.4.0.
The next release, targeting PyTorch 0.4.1, includes breaking changes.

PyTorch 0.3 Last Release

08 Jun 16:18
0ecec8b
Merge pull request #680 from OpenNMT/torch0.4

Fix softmaxes