A comprehensive survey on Internal Consistency and Self-Feedback in Large Language Models.

IAAR-Shanghai/ICSFSurvey


Internal Consistency and Self-Feedback in Large Language Models: A Survey

Xun Liang¹*, Shichao Song¹*, Zifan Zheng²*, Hanyu Wang¹, Qingchen Yu², Xunkai Li³, Rong-Hua Li³, Peng Cheng⁴, Zhonghao Wang⁴, Feiyu Xiong², Zhiyu Li²†

¹RUC, ²IAAR, ³BIT, ⁴Xinhua
*Equal contribution, †Corresponding author (lizy@iaar.ac.cn)

Important

  • Consider giving our repository a 🌟 so that you receive the latest news (paper list updates, new comments, etc.).
  • If you would like to cite our work, our BibTeX entry is in CITATION.bib.

📰 News

  • 2024/08/24 Updated paper list for better user experience. Link. Ongoing updates.
  • 2024/07/22 Our paper ranks third on Hugging Face Daily Papers! Link.
  • 2024/07/21 Our paper is now available on arXiv. Link.

🎉 Introduction

Welcome to the GitHub repository for our survey paper titled "Internal Consistency and Self-Feedback in Large Language Models: A Survey." The survey's goal is to provide a unified perspective on the self-evaluation and self-updating mechanisms in LLMs, encapsulated within the frameworks of Internal Consistency and Self-Feedback.
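
Concretely, the Self-Feedback framework can be read as a loop: the model expresses itself, a consistency signal is obtained from that expression (Self-Evaluation), and the expression is then revised (Self-Update). The Python sketch below is only a minimal illustration of this loop under assumed interfaces, not code from the paper; `generate`, `evaluate_consistency`, and `refine` are hypothetical placeholders for LLM-backed steps.

```python
# Minimal sketch of a Self-Feedback loop (illustrative only).
# `generate`, `evaluate_consistency`, and `refine` are hypothetical
# placeholders for LLM-backed steps; they are not part of the survey's code.
from typing import Callable


def self_feedback_loop(
    question: str,
    generate: Callable[[str], str],
    evaluate_consistency: Callable[[str, str], float],
    refine: Callable[[str, str, float], str],
    threshold: float = 0.9,
    max_rounds: int = 3,
) -> str:
    answer = generate(question)  # initial expression
    for _ in range(max_rounds):
        signal = evaluate_consistency(question, answer)  # Self-Evaluation
        if signal >= threshold:  # internally consistent enough; stop early
            break
        answer = refine(question, answer, signal)  # Self-Update
    return answer
```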

This repository collects the key resources accompanying the survey; the paper list below is the main one.


📚 Paper List

Here we list the most important references cited in our survey, as well as other papers we consider worth noting. This list will be updated regularly.

Related Survey Papers

These are some of the most relevant surveys related to our paper.

  • Extrinsic Hallucinations in LLMs
    OpenAI, Blog, 2024 [Paper]

  • When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
    PSU, arXiv, 2024 [Paper]

  • A Survey on Self-Evolution of Large Language Models
    PKU, arXiv, 2024 [Paper] [Code]

  • Demystifying Chains, Trees, and Graphs of Thoughts
    ETH, arXiv, 2024 [Paper]

  • Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
    UCSB, TACL, 2024 [Paper] [Code]

  • Uncertainty in Natural Language Processing: Sources, Quantification, and Applications
    Nankai, arXiv, 2023 [Paper]

Section IV: Consistency Signal Acquisition

From an LLM's various forms of expression, we can obtain corresponding consistency signals that help guide how those expressions are updated.
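
As one minimal illustration (not taken from any of the papers below), the sketch derives a confidence-style signal from token log-probabilities, one common family of signals in this section; `token_logprobs` is a hypothetical helper assumed to return per-token log-probabilities of a response given a question.

```python
# Sketch: confidence-style consistency signal from token log-probabilities.
# `token_logprobs` is a hypothetical helper returning per-token
# log-probabilities of `response` conditioned on `question`.
import math
from typing import Callable, List


def mean_token_probability(
    question: str,
    response: str,
    token_logprobs: Callable[[str, str], List[float]],
) -> float:
    logprobs = token_logprobs(question, response)
    if not logprobs:
        return 0.0
    # Average per-token probability; higher values suggest higher confidence.
    return sum(math.exp(lp) for lp in logprobs) / len(logprobs)
```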

Confidence Estimation

  • Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs
    NUS, ICLR, 2024 [Paper] [Code]

  • Linguistic Calibration of Long-Form Generations
    Stanford, ICML, 2024 [Paper] [Code]

  • InternalInspector I2: Robust Confidence Estimation in LLMs through Internal States
    VT, arXiv, 2024 [Paper]

  • Cycles of Thought: Measuring LLM Confidence through Stable Explanations
    UCLA, arXiv, 2024 [Paper]

  • TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness
    UoEdin, arXiv, 2024 [Paper] [Code]

  • Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
    Oxford, ICLR, 2023 [Paper] [Code]

  • Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness
    UMD, arXiv, 2023 [Paper]

  • Teaching models to express their uncertainty in words
    Oxford, TMLR, 2022 [Paper] [Code]

  • Language Models (Mostly) Know What They Know
    Anthropic, arXiv, 2022 [Paper]

Hallucination Detection

  • Detecting hallucinations in large language models using semantic entropy
    Oxford, Nature, 2024 [Paper]

  • INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection
    Alibaba, ICLR, 2024 [Paper]

  • LLM Internal States Reveal Hallucination Risk Faced With a Query
    HKUST, arXiv, 2024 [Paper]

  • Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals
    Fudan, arXiv, 2024 [Paper]

  • Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method
    SDU, NAACL, 2024 [Paper] [Code]

  • LM vs LM: Detecting Factual Errors via Cross Examination
    TAU, EMNLP, 2023 [Paper]

  • SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
    Cambridge, EMNLP, 2023 [Paper] [Code]

Uncertainty Estimation

  • Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models
    UIUC, TMLR, 2024 [Paper] [Code]

  • Uncertainty Estimation of Large Language Models in Medical Question Answering
    HKU, arXiv, 2024 [Paper]

  • To Believe or Not to Believe Your LLM
    Google, arXiv, 2024 [Paper]

  • Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models
    DU, ACL, 2024 [Paper] [Code]

  • Active Prompting with Chain-of-Thought for Large Language Models
    HUST, arXiv, 2023 [Paper] [Code]

  • Uncertainty Estimation in Autoregressive Structured Prediction
    Yandex, ICLR, 2021 [Paper]

  • On Hallucination and Predictive Uncertainty in Conditional Language Generation
    UCSB, EACL, 2021 [Paper]

Verbal Critiquing

  • LLM Critics Help Catch LLM Bugs
    OpenAI, arXiv, 2024 [Paper]

  • Reasons to Reject? Aligning Language Models with Judgments
    Tencent, ACL, 2024 [Paper] [Code]

  • Self-critiquing models for assisting human evaluators
    OpenAI, arXiv, 2022 [Paper]

Faithfulness Measurement

  • Are self-explanations from Large Language Models faithful?
    Mila, ACL, 2024 [Paper]

  • On Measuring Faithfulness or Self-consistency of Natural Language Explanations
    UAH, ACL, 2024 [Paper] [Code]

Consistency Estimation

  • Semantic Consistency for Assuring Reliability of Large Language Models
    DTU, arXiv, 2023 [Paper]

Section V: Reasoning Elevation

Enhancing reasoning ability (i.e., improving LLM performance on QA tasks) through Self-Feedback strategies.
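
One representative strategy in the list below is self-consistency decoding (see "Self-Consistency Improves Chain of Thought Reasoning in Language Models"): sample several chain-of-thought completions and take a majority vote over their final answers. The sketch below is a minimal illustration; `sample_cot` and `extract_answer` are hypothetical placeholders.

```python
# Sketch of self-consistency decoding: majority vote over the final answers
# of several sampled chain-of-thought completions.
# `sample_cot` and `extract_answer` are hypothetical placeholders.
from collections import Counter
from typing import Callable


def self_consistency(
    question: str,
    sample_cot: Callable[[str], str],      # returns one reasoning chain
    extract_answer: Callable[[str], str],  # parses the chain's final answer
    n_samples: int = 10,
) -> str:
    answers = [extract_answer(sample_cot(question)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # most frequent answer
```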

Reasoning Topologically

  • DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines
    Stanford, ICLR, 2024 [Paper] [Code]

  • Graph of Thoughts: Solving Elaborate Problems with Large Language Models
    ETH, AAAI, 2024 [Paper] [Code]

  • Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation
    BIT, ACL, 2024 [Paper] [Code]

  • Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
    PKU, arXiv, 2024 [Paper] [Code]

  • RATT: A Thought Structure for Coherent and Correct LLM Reasoning
    PSU, arXiv, 2024 [Paper] [Code]

  • Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
    Stanford, arXiv, 2024 [Paper] [Code]

  • Chain-of-Thought Reasoning Without Prompting
    Google, arXiv, 2024 [Paper]

  • Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives
    ZJU, ACL, 2024 [Paper]

  • LLMs cannot find reasoning errors, but can correct them given the error location
    Cambridge, ACL, 2024 [Paper]

  • Forward-Backward Reasoning in Large Language Models for Mathematical Verification
    SUSTech, ACL, 2024 [Paper] [Code]

  • LeanReasoner: Boosting Complex Logical Reasoning with Lean
    JHU, NAACL, 2024 [Paper] [Code]

  • Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios
    Kuaishou, ACL, 2024 [Paper]

  • Soft Self-Consistency Improves Language Model Agents
    UNC-CH, ACL, 2024 [Paper] [Code]

  • Self-Evaluation Guided Beam Search for Reasoning
    NUS, NeurIPS, 2023 [Paper] [Code]

  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models
    Princeton, NeurIPS, 2023 [Paper] [Code]

  • Self-Consistency Improves Chain of Thought Reasoning in Language Models
    Google, ICLR, 2023 [Paper]

  • DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines
    Stanford, arXiv, 2023 [Paper] [Code]

  • Universal Self-Consistency for Large Language Model Generation
    Google, arXiv, 2023 [Paper]

  • Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency
    PKU, ACL, 2023 [Paper] [Code]

  • Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
    Google, arXiv, 2023 [Paper]

  • Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP
    Stanford, arXiv, 2023 [Paper] [Code]

  • Making Language Models Better Reasoners with Step-Aware Verifier
    PKU, ACL, 2023 [Paper] [Code]

  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
    Google, NeurIPS, 2022 [Paper]

  • Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
    Washington, EMNLP, 2022 [Paper] [Code]

Refining with Responses

  • Small Language Models Need Strong Verifiers to Self-Correct Reasoning
    UMich, ACL, 2024 [Paper] [Code]

  • Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
    TUDa, arXiv, 2024 [Paper] [Code]

  • Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B
    Fudan, arXiv, 2024 [Paper] [Code]

  • Teaching Language Models to Self-Improve by Learning from Language Feedback
    NEU, ACL, 2024 [Paper]

  • Large Language Models Can Self-Improve At Web Agent Tasks
    UPenn, arXiv, 2024 [Paper]

  • Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
    Tencent, arXiv, 2024 [Paper]

  • Can LLMs Learn from Previous Mistakes? Investigating LLMs’ Errors to Boost for Reasoning
    UCSD, ACL, 2024 [Paper] [Code]

  • Fine-Grained Self-Endorsement Improves Factuality and Reasoning
    XMU, ACL, 2024 [Paper]

  • Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning
    KCL, ACL, 2024 [Paper] [Code]

  • Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation
    CUHK, ACL, 2024 [Paper] [Code]

  • Self-Rewarding Language Models
    Meta, arXiv, 2024 [Paper]

  • Learning From Mistakes Makes LLM Better Reasoner
    Microsoft, arXiv, 2024 [Paper] [Code]

  • Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
    CMU, NeurIPS, 2023 [Paper] [Code]

  • Large Language Models Can Self-Improve
    Illinois, EMNLP, 2023 [Paper]

  • Improving Logical Consistency in Pre-Trained Language Models using Natural Language Inference
    Stanford, Stanford CS224N Custom Project, 2022 [Paper]

  • Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference
    Stanford, EMNLP, 2022 [Paper] [Code]

Multi-Agent Collaboration

  • The Consensus Game: Language Model Generation via Equilibrium Search
    MIT, ICLR, 2024 [Paper]

  • Improving Factuality and Reasoning in Language Models through Multiagent Debate
    MIT, ICML, 2024 [Paper] [Code]

  • Scaling Large-Language-Model-based Multi-Agent Collaboration
    THU, arXiv, 2024 [Paper] [Code]

  • AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning
    ZJU, ACL, 2024 [Paper] [Code]

  • ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
    UNC, ACL, 2024 [Paper] [Code]

  • REFINER: Reasoning Feedback on Intermediate Representations
    EPFL, EACL, 2024 [Paper] [Code]

  • Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate
    HIT, EMNLP, 2023 [Paper] [Code]

Section VI: Hallucination Alleviation

Improving factual accuracy in open-ended generation and reducing hallucinations through Self-Feedback strategies.
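
Many entries below follow a verify-then-revise pattern (e.g., Chain-of-Verification or EVER): draft a response, check its individual claims, and rewrite whatever fails verification. The sketch is a minimal illustration of that pattern, not any paper's exact method; all four callables are hypothetical LLM-backed steps.

```python
# Sketch of a generic verify-then-revise loop for reducing hallucinations.
# `draft`, `extract_claims`, `verify_claim`, and `revise` are hypothetical
# placeholders for LLM-backed steps; no specific paper's method is implied.
from typing import Callable, List


def verify_then_revise(
    prompt: str,
    draft: Callable[[str], str],
    extract_claims: Callable[[str], List[str]],
    verify_claim: Callable[[str], bool],
    revise: Callable[[str, List[str]], str],
) -> str:
    response = draft(prompt)           # initial generation
    claims = extract_claims(response)  # split into atomic factual claims
    failed = [c for c in claims if not verify_claim(c)]
    if failed:
        response = revise(response, failed)  # rewrite the unsupported claims
    return response
```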

Mitigating Hallucination while Generating

  • Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
    ETH, ICLR, 2024 [Paper] [Code]

  • Mitigating Entity-Level Hallucination in Large Language Models
    THU, arXiv, 2024 [Paper] [Code]

  • Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning
    HKUST, arXiv, 2024 [Paper] [Code]

  • Fine-grained Hallucination Detection and Editing for Language Models
    Washington, arXiv, 2024 [Paper] [Code]

  • EVER: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
    UNC, arXiv, 2023 [Paper]

  • Chain-of-Verification Reduces Hallucination in Large Language Models
    Meta, arXiv, 2023 [Paper]

  • PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
    UCI, arXiv, 2023 [Paper]

  • RARR: Researching and Revising What Language Models Say, Using Language Models
    CMU, ACL, 2023 [Paper] [Code]

Refining the Response Iteratively

  • Teaching Large Language Models to Self-Debug
    Google, ICLR, 2024 [Paper]

  • LLMs can learn self-restraint through iterative self-reflection
    ServiceNow, arXiv, 2024 [Paper]

  • Reflexion: Language Agents with Verbal Reinforcement Learning
    Northeastern, NeurIPS, 2023 [Paper] [Code]

  • Generating Sequences by Learning to Self-Correct
    AI2, ICLR, 2023 [Paper]

  • MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models
    UCSB, EMNLP, 2023 [Paper] [Code]

  • Self-Refine: Iterative Refinement with Self-Feedback
    CMU, NeurIPS, 2023 [Paper] [Code]

  • PEER: A Collaborative Language Model
    Meta, ICLR, 2023 [Paper]

  • Re3: Generating Longer Stories With Recursive Reprompting and Revision
    Berkeley, EMNLP, 2023 [Paper] [Code]

Activating Truthfulness

  • Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning
    BUAA, AAAI, 2024 [Paper] [Code]

  • Look Within, Why LLMs Hallucinate: A Causal Perspective
    NUDT, arXiv, 2024 [Paper]

  • Retrieval Head Mechanistically Explains Long-Context Factuality
    PKU, arXiv, 2024 [Paper] [Code]

  • TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
    ICT, ACL, 2024 [Paper] [Code]

  • Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
    Harvard, NeurIPS, 2023 [Paper] [Code]

  • Fine-tuning Language Models for Factuality
    Stanford, arXiv, 2023 [Paper]

Decoding Truthfully

  • Diver: Large Language Model Decoding with Span-Level Mutual Information Verification
    IA, arXiv, 2024 [Paper]

  • SED: Self-Evaluation Decoding Enhances Large Language Models for Better Generation
    FDU, arXiv, 2024 [Paper]

  • Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
    Edin, arXiv, 2024 [Paper] [Code]

  • DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
    MIT, ICLR, 2024 [Paper] [Code]

  • Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
    Washington, arXiv, 2023 [Paper]

  • Contrastive Decoding: Open-ended Text Generation as Optimization
    Stanford, ACL, 2023 [Paper] [Code]

Section VII: Other Tasks

Beyond the consistency-oriented tasks above (reasoning elevation and hallucination alleviation), a range of other tasks also make use of Self-Feedback strategies.

Preference Learning

  • Aligning Large Language Models from Self-Reference AI Feedback with one General Principle
    FDU, arXiv, 2024 [Paper] [Code]

  • Aligning Large Language Models with Self-generated Preference Data
    KAIST, arXiv, 2024 [Paper]

  • Self-Improving Robust Preference Optimization
    Cohere, arXiv, 2024 [Paper]

  • Self-Play Preference Optimization for Language Model Alignment
    UCLA, arXiv, 2024 [Paper] [Code]

  • ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline
    Zhipu, arXiv, 2024 [Paper] [Code]

  • SALMON: Self-Alignment with Instructable Reward Models
    IBM, ICLR, 2024 [Paper] [Code]

  • Self-Specialization: Uncovering Latent Expertise within Large Language Models
    GT, ACL, 2024 [Paper]

  • BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
    PKU, NeurIPS, 2023 [Paper] [Code]

  • Safe RLHF: Safe Reinforcement Learning from Human Feedback
    PKU, arXiv, 2023 [Paper] [Code]

  • Aligning Large Language Models through Synthetic Feedback
    NAVER, arXiv, 2023 [Paper] [Code]

  • OpenAssistant Conversations -- Democratizing Large Language Model Alignment
    Unaffiliated, arXiv, 2023 [Paper] [Code]

  • The Capacity for Moral Self-Correction in Large Language Models
    Anthropic, arXiv, 2023 [Paper]

  • Constitutional AI: Harmlessness from AI Feedback
    Anthropic, arXiv, 2022 [Paper] [Code]

  • Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
    Anthropic, arXiv, 2022 [Paper] [Code]

Knowledge Distillation

  • Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
    RUC, ICLR, 2024 [Paper] [Code]

  • On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
    Google, ICLR, 2024 [Paper]

  • Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
    Idiap, arXiv, 2024 [Paper]

  • Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation
    NTU, EMNLP, 2023 [Paper] [Code]

  • SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
    KAIST, Blog, 2023 [Paper]

  • Reinforced Self-Training (ReST) for Language Modeling
    Google, arXiv, 2023 [Paper]

  • Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
    Washington, arXiv, 2023 [Paper]

  • Self-Knowledge Distillation with Progressive Refinement of Targets
    LG, ICCV, 2021 [Paper] [Code]

  • Revisiting Knowledge Distillation via Label Smoothing Regularization
    NUS, CVPR, 2020 [Paper]

  • Self-Knowledge Distillation in Natural Language Processing
    Handong, RANLP, 2019 [Paper]

Continuous Learning

  • Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching
    CUHK, arXiv, 2024 [Paper]

  • Self-Evolving GPT: A Lifelong Autonomous Experiential Learner
    HIT, ACL, 2024 [Paper] [Code]

Data Synthesis

  • Self-Instruct: Aligning Language Models with Self-Generated Instructions
    Washington, ACL, 2023 [Paper] [Code]

  • Self-training Improves Pre-training for Natural Language Understanding
    Facebook, arXiv, 2020 [Paper]

Consistency Optimization

  • Improving the Robustness of Large Language Models via Consistency Alignment
    SDU, LREC-COLING, 2024 [Paper]

Decision Making

  • Can Large Language Models Play Games? A Case Study of A Self-Play Approach
    Northwestern, arXiv, 2024 [Paper]

Event Argument Extraction

  • ULTRA: Unleash LLMs' Potential for Event Argument Extraction through Hierarchical Modeling and Pair-wise Refinement
    UMich, ACL, 2024 [Paper]

Inference Acceleration

  • Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
    ZJU, ACL, 2024 [Paper] [Code]

Machine Translation

  • TasTe: Teaching Large Language Models to Translate through Self-Reflection
    HIT, ACL, 2024 [Paper] [Code]

Negotiation Optimization

  • Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback
    Edin, arXiv, 2023 [Paper] [Code]

Retrieval Augmented Generation

  • Improving Retrieval Augmented Language Model with Self-Reasoning
    Baidu, arXiv, 2024 [Paper]

Text Classification

  • Text Classification Using Label Names Only: A Language Model Self-Training Approach
    Illinois, EMNLP, 2020 [Paper] [Code]

Section VIII.A: Meta Evaluation

Some common evaluation benchmarks.

Consistency Evaluation

  • Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?
    PKU, arXiv, 2024 [Paper] [Code]

  • Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models
    RUG, EMNLP, 2023 [Paper] [Code]

  • Predicting Question-Answering Performance of Large Language Models through Semantic Consistency
    IBM, GEM, 2023 [Paper] [Code]

  • BECEL: Benchmark for Consistency Evaluation of Language Models
    Oxford, COLING, 2022 [Paper] [Code]

  • Measuring and Improving Consistency in Pretrained Language Models
    BIU, TACL, 2021 [Paper] [Code]

Self-Knowledge Evaluation

  • Can I understand what I create? Self-Knowledge Evaluation of Large Language Models
    THU, arXiv, 2024 [Paper]

  • Can AI Assistants Know What They Don't Know?
    Fudan, arXiv, 2024 [Paper] [Code]

  • Do Large Language Models Know What They Don’t Know?
    Fudan, ACL, 2023 [Paper] [Code]

Uncertainty Evaluation

  • UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions
    Nankai, arXiv, 2024 [Paper] [Code]

  • Benchmarking LLMs via Uncertainty Quantification
    Tencent, arXiv, 2024 [Paper] [Code]

Feedback Ability Evaluation

  • CriticBench: Benchmarking LLMs for Critique-Correct Reasoning
    THU, ACL, 2024 [Paper] [Code]

Theoretical Perspectives

Some theoretical research on Internal Consistency and Self-Feedback strategies.

  • AI models collapse when trained on recursively generated data
    Oxford, Nature, 2024 [Paper]

  • A Theoretical Understanding of Self-Correction through In-context Alignment
    MIT, ICML, 2024 [Paper]

  • Large Language Models Cannot Self-Correct Reasoning Yet
    Google, ICLR, 2024 [Paper]

  • When Can Transformers Count to n?
    NYU, arXiv, 2024 [Paper]

  • Large Language Models as Reliable Knowledge Bases?
    UoE, arXiv, 2024 [Paper]

  • States Hidden in Hidden States: LLMs Emerge Discrete State Representations Implicitly
    THU, arXiv, 2024 [Paper]

  • Large Language Models have Intrinsic Self-Correction Ability
    UB, arXiv, 2024 [Paper]

  • What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering
    NECLab, arXiv, 2024 [Paper] [Code]

  • Large Language Models Must Be Taught to Know What They Don't Know
    NYU, arXiv, 2024 [Paper] [Code]

  • Are LLMs classical or nonmonotonic reasoners? Lessons from generics
    UvA, ACL, 2024 [Paper] [Code]

  • On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept
    MSU, arXiv, 2024 [Paper]

  • Calibrating Reasoning in Language Models with Internal Consistency
    SJTU, arXiv, 2024 [Paper]

  • Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
    TAU, arXiv, 2024 [Paper]

  • Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
    OSU, arXiv, 2024 [Paper] [Code]

  • SELF-[IN]CORRECT: LLMs Struggle with Refining Self-Generated Responses
    JHU, arXiv, 2024 [Paper]

  • Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
    RUC, ACL, 2024 [Paper] [Code]

  • Do Large Language Models Latently Perform Multi-Hop Reasoning?
    TAU, arXiv, 2024 [Paper]

  • Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement
    UCSB, ACL, 2024 [Paper] [Code]

  • The Impact of Reasoning Step Length on Large Language Models
    Rutgers, ACL, 2024 [Paper] [Code]

  • Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
    ASU, NeurIPS, 2023 [Paper]

  • GPT-4 Doesn’t Know It’s Wrong: An Analysis of Iterative Prompting for Reasoning Problems
    ASU, NeurIPS, 2023 [Paper]

  • Lost in the Middle: How Language Models Use Long Contexts
    Stanford, TACL, 2023 [Paper]

  • How Language Model Hallucinations Can Snowball
    NYU, arXiv, 2023 [Paper] [Code]

  • On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
    UCB, FITEE, 2022 [Paper]

  • On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
    Washington, FAccT, 2021 [Paper]

  • How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
    CMU, TACL, 2021 [Paper] [Code]

  • Language Models as Knowledge Bases?
    Facebook, EMNLP, 2019 [Paper] [Code]

📝 Citation

@article{liang2024internal,
  title={Internal consistency and self-feedback in large language models: A survey},
  author={Liang, Xun and Song, Shichao and Zheng, Zifan and Wang, Hanyu and Yu, Qingchen and Li, Xunkai and Li, Rong-Hua and Cheng, Peng and Wang, Zhonghao and Xiong, Feiyu and Li, Zhiyu},
  journal={arXiv preprint arXiv:2407.14507},
  year={2024}
}