Custom Entity Recognition #45

Open · vamsitharun opened this issue Apr 2, 2019 · 4 comments

@vamsitharun

How do I train custom entity labels other than PER, LOC, ORG & MISC?

I need to extract entities like "total amount" from a document.

@guillaumegenthial (Owner)

The code is data agnostic: if you provide the right vocab files / data files, it will be able to learn any task.
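
For illustration, here is a minimal sketch of what IOB-style training data with a custom AMOUNT label could look like; the tag name, the tokenization and the exact file layout (single "word tag" columns here, as opposed to separate words/tags files) are assumptions, so adapt them to whatever your data-building script expects:

The      O
total    O
amount   O
due      O
is       O
1,200    B-AMOUNT
USD      I-AMOUNT
.        O

The same tag strings (O, B-AMOUNT, I-AMOUNT) also have to appear in the tags vocabulary file. Keeping the B-/I- prefixes is worth doing: conlleval relies on them to group tokens into phrases when computing precision, recall and FB1 (see the comments below).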

@VioletJKI

@guillaumegenthial Hi, I used some tags other than PER, LOC, ORG & MISC, but when I use conlleval to evaluate the predictions, only the accuracy is non-zero; precision, recall and FB1 are all zero, and there are no evaluation results for each tag. The output of conlleval is as follows. Can you tell me what's wrong? Thank you~

processed 120652 tokens with 0 phrases; found: 0 phrases; correct: 0.
accuracy: 97.37%; precision: 0.00%; recall: 0.00%; FB1: 0.00

@ahmadshabbir2468

@VioletJKI I face the same problem. Were you able to resolve it?

@karthikeyansam commented Sep 17, 2019

I think you need to provide -r in order to get results for raw tags (tags without the B-/I- prefix). The options are listed below; see the example command after them.

conlleval: evaluate result of processing CoNLL-2000 shared task
usage:     conlleval [-l] [-r] [-d delimiterTag] [-o oTag] < file
README:    http://cnts.uia.ac.be/conll2000/chunking/output.html
options:   l: generate LaTeX output for tables like in
              http://cnts.uia.ac.be/conll2003/ner/example.tex
           r: accept raw result tags (without B- and I- prefix;
              assumes one word per chunk)
           d: alternative delimiter tag (default is single space)
           o: alternative outside tag (default is O)
note:      the file should contain lines with items separated
           by $delimiter characters (default space). The final
           two items should contain the correct tag and the
           guessed tag in that order. Sentences should be
           separated from each other by empty lines or lines
           with $boundary fields (default -X-).
url:       http://lcg-www.uia.ac.be/conll2000/chunking/
started:   1998-09-25
version:   2004-01-26
author:    Erik Tjong Kim Sang erikt@uia.ua.ac.be
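
For example, if the predictions file has one token per line in the "word gold_tag predicted_tag" layout the note above describes, a raw-tag evaluation would be run roughly like this (the file name, and whether your copy of the script is called conlleval or conlleval.pl, are assumptions):

perl conlleval.pl -r < predictions.txt

Alternatively, keep the B-/I- prefixes on your custom tags (e.g. B-AMOUNT / I-AMOUNT); then conlleval can group tokens into phrases and report per-tag precision, recall and FB1 without -r.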
