Mitigating a language model's over-confidence with NLI predictions on Multi-NLI hypotheses with random word order using PAWS (paraphrase) and Winogrande (anaphora).
Updated May 28, 2024 - Jupyter Notebook
Do BERT models applied to the Winograd Schema possess commonsense reasoning?