This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
LEMMA: An effective and explainable way to detect multimodal misinformation with an LVLM and external knowledge augmentation, incorporating the intuition and reasoning capability of the LVLM.
An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models.
Code for USENIX Security 2024 paper: Moderating Illicit Online Image Promotion for Unsafe User Generated Content Games Using Large Vision-Language Models.