Hierarchical Gaussian Filter (HGF) model of the conditioned hallucinations task (Powers et al., 2017)
Code for PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation (Rebuffel, Soulier, Scoutheeten, Gallinari; INLG 2020)
Code related to the paper "On hallucinations in tomographic imaging"
Code for Controlling Hallucinations at Word Level in Data-to-Text Generation (C. Rebuffel, M. Roberti, L. Soulier, G. Scoutheeten, R. Cancelliere, P. Gallinari)
A PyTorch implementation of the paper Thinking Hallucination for Video Captioning.
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
The purpose of this application is to test LLM-generated interpretations of medical observations. The explanations are generated fully automatically by a large language model. This application should be used for experimental purposes only; it does not provide support for real-world cases and does not replace advice from medical professionals.
Hallucinate: info on GPT, LLMs, AI chat, OpenAI, and Sam Altman
[TruthGPT](https://github.com/SingularityLabs-ai/TruthGPT-mini) for Google
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating
The full pipeline for creating the UHGEval hallucination dataset
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
An Easy-to-use Hallucination Detection Framework for LLMs.
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
[ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding"
An attack to induce hallucinations in LLMs
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
[ACL 2024] Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency