parvathysarat/pretrained-lm-tasks

Demonstration of question answering and text classification tasks using pretrained language models (LMs).

  1. Google's pretrained BERT question-answering model applied to QA over Wikipedia text, with no additional fine-tuning (see the first sketch below).
  2. fast.ai's ULMFiT (Universal Language Model Fine-tuning) further fine-tuned on a small labelled dataset and used for a binary text classification task (see the second sketch below).

Both pretrained language models are openly available.
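
As a rough illustration of the first task, the sketch below runs extractive QA over a short Wikipedia-style passage with a BERT model already fine-tuned on SQuAD, via Hugging Face's `transformers` pipeline. The model name, question, and passage are illustrative assumptions and are not taken from this repository's code.

```python
# Minimal sketch: extractive QA with a pretrained BERT model (no further fine-tuning).
# The model checkpoint and example passage below are assumptions, not repo code.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT (Bidirectional Encoder Representations from Transformers) was "
    "introduced by researchers at Google in 2018 and is pretrained on "
    "large unlabelled text corpora such as Wikipedia."
)

result = qa(question="Who introduced BERT?", context=context)
print(result["answer"], result["score"])
```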
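
For the second task, here is a minimal ULMFiT-style sketch using the fastai v2 API: fine-tune the pretrained AWD-LSTM language model on the target corpus, then reuse its encoder for a binary classifier. The CSV filename, column names, and epoch counts are placeholders and may differ from the repository's notebook, which may also use an older fastai API.

```python
# Minimal ULMFiT sketch with fastai v2. "reviews.csv" with 'text'/'label'
# columns is a hypothetical dataset standing in for the repo's data.
from fastai.text.all import *

# Stage 1: fine-tune the pretrained AWD-LSTM language model on the target text.
dls_lm = TextDataLoaders.from_csv(
    ".", csv_fname="reviews.csv", text_col="text", is_lm=True
)
lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
lm_learn.fine_tune(1)
lm_learn.save_encoder("finetuned_encoder")

# Stage 2: binary classification, reusing the fine-tuned encoder and vocab.
dls_clas = TextDataLoaders.from_csv(
    ".", csv_fname="reviews.csv", text_col="text", label_col="label",
    text_vocab=dls_lm.vocab
)
clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
clas_learn.load_encoder("finetuned_encoder")
clas_learn.fine_tune(2)
```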
