
Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning

License: MIT

This is the implementation of the paper Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning (Third Workshop on Computational Approaches to Linguistic Code-Switching, ACL 2018). The code is written in Python using PyTorch.

Supplementary materials (including the distribution of the train, dev, and test sets) can be found here.

If you use any of the source code or datasets included in this toolkit in your work, please cite the following paper. The BibTeX entry is listed below:

@InProceedings{W18-3207,
  author    = "Winata, Genta Indra
               and Madotto, Andrea
               and Wu, Chien-Sheng
               and Fung, Pascale",
  title     = "Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning",
  booktitle = "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
  year      = "2018",
  publisher = "Association for Computational Linguistics",
  pages     = "62--67",
  location  = "Melbourne, Australia",
  url       = "http://aclweb.org/anthology/W18-3207"
}

Abstract

Lack of text data has been a major issue in code-switching language modeling. In this paper, we introduce a multi-task learning based language model that shares the syntax representation of languages to leverage linguistic information and tackle the low-resource data issue. Our model jointly learns language modeling and part-of-speech tagging on code-switched utterances. In this way, the model is able to identify the location of code-switching points and improve the prediction of the next word. Our approach outperforms a standard LSTM-based language model, with improvements of 9.7% and 7.4% in perplexity on the SEAME Phase I and Phase II datasets, respectively.

Model Architecture
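The abstract above describes joint training of a language model and a POS tagger over a shared representation. The sketch below illustrates that multi-task idea in PyTorch: a single shared LSTM encoder feeds a next-word prediction head and a POS-tagging head, and the two cross-entropy losses are summed. It is an illustration under assumed vocabulary sizes, dimensions, and loss weighting, not the exact architecture used in the paper or in this repository.

```python
import torch
import torch.nn as nn

class MultiTaskLM(nn.Module):
    """Illustrative multi-task model: one shared LSTM encoder feeding
    a language-modeling head and a POS-tagging head."""

    def __init__(self, vocab_size, num_pos_tags, emsize=500, nhid=500,
                 dropout=0.4, tied=True):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.embed = nn.Embedding(vocab_size, emsize)
        self.encoder = nn.LSTM(emsize, nhid, batch_first=True)
        self.lm_head = nn.Linear(nhid, vocab_size)      # next-word prediction
        self.pos_head = nn.Linear(nhid, num_pos_tags)   # POS tagging
        if tied and emsize == nhid:
            # Share input and output word embeddings (cf. the --tied flag).
            self.lm_head.weight = self.embed.weight

    def forward(self, words):
        # words: (batch, seq_len) token ids
        hidden, _ = self.encoder(self.drop(self.embed(words)))
        hidden = self.drop(hidden)
        return self.lm_head(hidden), self.pos_head(hidden)

# Joint training step: the two losses are summed
# (the 0.25 weight on the POS loss is an arbitrary example value).
model = MultiTaskLM(vocab_size=10000, num_pos_tags=40)
criterion = nn.CrossEntropyLoss()
words = torch.randint(0, 10000, (8, 20))       # toy batch of token ids
next_words = torch.randint(0, 10000, (8, 20))  # LM targets (input shifted by one)
pos_tags = torch.randint(0, 40, (8, 20))       # POS-tag targets

lm_logits, pos_logits = model(words)
loss = (criterion(lm_logits.reshape(-1, 10000), next_words.reshape(-1))
        + 0.25 * criterion(pos_logits.reshape(-1, 40), pos_tags.reshape(-1)))
loss.backward()
```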

Prerequisites:

  • Python 3.5 or 3.6
  • PyTorch 0.2 (or later)
  • Stanford CoreNLP (tokenization and segmentation; see the sketch after this list)
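Stanford CoreNLP is used to tokenize and segment the corpus. As a minimal sketch only (not necessarily the preprocessing pipeline used by this repository), the snippet below tokenizes a code-switched utterance through NLTK's CoreNLP wrapper, assuming a CoreNLP server is already running locally on port 9000; Mandarin word segmentation additionally requires the CoreNLP Chinese models.

```python
from nltk.parse.corenlp import CoreNLPParser

# Assumes a Stanford CoreNLP server is already running, e.g. started with:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
tokenizer = CoreNLPParser(url='http://localhost:9000')

# Tokenize one code-switched (Mandarin-English) utterance.
tokens = list(tokenizer.tokenize('我 想 要 buy a cup of coffee'))
print(tokens)
```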

Data

SEAME Corpus from LDC: Mandarin-English Code-Switching in South-East Asia

Run the code:

Multi-task

❱❱❱ python main_multi_task.py --tied --clip=0.25 --dropout=0.4 --postagdropout=0.4 --p=0.25 --nhid=500 --postagnhid=500 --emsize=500 --postagemsize=500 --cuda --data=../data/seame_phase2
