coco07/TERA (forked from s3prl/s3prl)

Self-Supervised Speech Pre-training and Representation Learning Toolkit.


The toolkit has three major usages:

Pretrain

  • Pretrain upstream models, including Mockingjay, Audio ALBERT and TERA.
  • Document: pretrain/README.md

Upstream

  • Easily load most existing upstream models with pretrained weights through a unified I/O interface.
  • Pretrained models are registered through torch.hub, so you can plug them into your own project with a single line of code, without depending on this toolkit's coding style (see the sketch after this list).
  • Document: upstream/README.md
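
Below is a minimal sketch of that one-line loading, assuming the tera entry point registered in this repository's hubconf.py (other upstreams such as mockingjay follow the same pattern); the exact structure of the returned representations can differ between toolkit versions:

import torch

# Fetch the pretrained TERA upstream and its weights through torch.hub.
upstream = torch.hub.load('s3prl/s3prl', 'tera')
upstream.eval()

# 16 kHz waveforms of arbitrary length, one tensor per utterance.
wavs = [torch.randn(16000), torch.randn(32000)]

with torch.no_grad():
    # Frame-level representations; a tensor or a dict of hidden states
    # depending on the toolkit version.
    reps = upstream(wavs)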

Downstream

  • Utilize upstream representations in downstream tasks.
  • Document: downstream/README.md


Installation

  1. Python >= 3.6
  2. Install sox on your OS
  3. Install s3prl from the root of this repository:
     pip install -e ./
  4. Some upstream models require extra dependencies. If you encounter errors with a specific upstream model, see the README.md under that upstream's folder, e.g., upstream/pase/README.md
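
To quickly confirm that the editable install points at your clone (a minimal sketch; only the package name is assumed):

import s3prl
print(s3prl.__file__)  # should resolve to a path inside the cloned repository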
