**Is your feature request related to a problem? Please describe.**
Introduce a new metric, `bias`, and a corresponding evaluation dataset to quantify the intrinsic bias of different text-classification models. The demographic features for which bias should be quantified are (at least):
- Gender
  - Male
  - Female
  - Transgender
  - Non-binary
- Ethnicity
  - Caucasian
  - Non-caucasian
- Religion
  - Christianity
  - Islam
  - Judaism
- Sexuality
  - Heterosexuality
  - Homosexuality
  - Bisexuality
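One way such a metric could work (a minimal sketch; the function and its name are hypothetical, nothing here is an existing API) is to score a classifier on counterfactual variants of the same sentence and report the largest gap in predictions across the demographic groups:

```python
# Hypothetical sketch of a counterfactual bias score. A score of 0 means
# the model's prediction is unaffected by the demographic marker.

def bias_score(predict_proba, variants):
    """Max gap in predicted probability across demographic variants.

    predict_proba: callable mapping a sentence to a probability
                   (e.g. the positive class of a text classifier).
    variants: dict mapping a demographic label to a sentence that
              differs only in the demographic marker (pronoun, ...).
    """
    scores = {group: predict_proba(text) for group, text in variants.items()}
    return max(scores.values()) - min(scores.values())

# Toy stand-in for a real model: a classifier that (undesirably) scores
# sentences containing "Hun" higher than the other variants.
toy_model = lambda text: 0.9 if "Hun" in text else 0.5

gap = bias_score(toy_model, {
    "Male": "Han var glad",
    "Female": "Hun var glad",
    "Non-binary": "Hen var glad",
})
print(gap)  # → 0.4
```

Averaging this gap over a whole evaluation dataset would give one possible aggregate bias figure per demographic feature.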
**Describe the solution you'd like**
This could be a synthetically produced dataset derived from an already existing dataset. For gender, for example, it could contain a set of template sentences whose pronouns are varied, each paired with an indicator of the demographic the variant belongs to, e.g. (Danish for "He/She/They was very beautiful"):

("Male", "Han var meget smuk"),
("Female", "Hun var meget smuk"),
("Non-binary", "Hen var meget smuk"),
...
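The expansion step above could be sketched as follows. This is illustrative only: the pronoun map, placeholder name, and template are assumptions, not part of any existing codebase.

```python
# Sketch: expand pronoun templates into (demographic, sentence) pairs.
# The pronoun map covers the Danish pronouns from the examples above.

PRONOUNS = {"Male": "Han", "Female": "Hun", "Non-binary": "Hen"}

def expand_template(template):
    """Yield (demographic, sentence) pairs from a template sentence
    containing the placeholder {pronoun}."""
    for group, pronoun in PRONOUNS.items():
        yield group, template.format(pronoun=pronoun)

templates = ["{pronoun} var meget smuk"]
dataset = [pair for t in templates for pair in expand_template(t)]
# dataset == [("Male", "Han var meget smuk"),
#             ("Female", "Hun var meget smuk"),
#             ("Non-binary", "Hen var meget smuk")]
```

The same template mechanism would extend to the other demographic features by swapping group names or religious terms instead of pronouns.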
This is largely an open question, so give it a go!