Up-to-date curated list of state-of-the-art large vision-language model (LVLM) hallucination research: work, papers & resources
Updated Aug 5, 2024
These notes and resources are compiled from the crash course Prompt Engineering for Vision Models offered by DeepLearning.AI.
✨✨Latest Advances on Multimodal Large Language Models
Official Repository of Multi-Object Hallucination in Vision-Language Models
Curated papers on large language models in the healthcare and medical domain
A curated list of recent and past chart understanding work based on our survey paper: From Pixels to Insights: A Survey on Automatic Chart Understanding in the Era of Large Foundation Models.
A paper list covering large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
[ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models.
An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
🔥🔥🔥 A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio).
Recent advances in large vision-language models (LVLMs)
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering
ShareGPT4Omni: Towards Building Omni Large Multi-modal Models with Comprehensive Multi-modal Annotations
Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning"
This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
This is the official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models