pretrained-clinical-embeddings

Resources of pre-trained language models on clinical texts.

Pre-trained models

As of July 8, 2019, the following models have been made available:
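
Once a checkpoint has been downloaded, it can be loaded with standard tooling. The snippet below is a minimal sketch (not part of this repository) assuming the BERT TensorFlow checkpoint has first been converted to the Hugging Face transformers format; the local directory name mimic_bert_base/ is hypothetical.

# Minimal sketch: load a converted BERT checkpoint and produce contextual
# embeddings for a clinical sentence. The directory name is hypothetical.
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("mimic_bert_base/")
model = BertModel.from_pretrained("mimic_bert_base/")

inputs = tokenizer("Patient denies chest pain or shortness of breath.",
                   return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # shape: (1, sequence_length, hidden_size)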

Acknowledgments

We are grateful to the authors of BERT and ELMo for making the pre-training code and instructions publicly available. We are also thankful to the MIMIC-III team for providing valuable resources on clinical text. Please follow their instructions to get access to the MIMIC-III data before downloading the above pre-trained models.

Citation

If you use the models available in this repository, we would be grateful if you would cite the paper as follows:

  • Si, Yuqi, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. “Enhancing Clinical Concept Extraction with Contextual Embeddings.” Journal of the American Medical Informatics Association, July, ocz096. https://doi.org/10.1093/jamia/ocz096.
@article{si_enhancing_2019,
	title = {Enhancing clinical concept extraction with contextual embeddings},
	issn = {1527-974X},
	url = {https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocz096/5527248},
	doi = {10.1093/jamia/ocz096},
	language = {en},
	urldate = {2019-07-09},
	journal = {Journal of the American Medical Informatics Association},
	author = {Si, Yuqi and Wang, Jingqi and Xu, Hua and Roberts, Kirk},
	month = jul,
	year = {2019},
	pages = {ocz096}
}
