This repo accompanies the paper Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models (ICML 2024).
```
@inproceedings{wu2024evaluating,
title={Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models},
author={Mingrui Wu and Jiayi Ji and Oucheng Huang and Jiale Li and Yuhang Wu and Xiaoshuai Sun and Rongrong Ji},
booktitle={Forty-first International Conference on Machine Learning},
year={2024},
url={https://openreview.net/forum?id=xpSlt67vxQ}
}
```
Download R-Bench. The main annotation files include:
- image-level_filterd.json
- instance-level_filterd.json
- nocaps_pope_obj_random_image.json
- nocaps_pope_obj_popular_image.json
- nocaps_pope_obj_adversarial_image.json
- web_data
These files contain the annotations for the image-level, instance-level, POPE-object, and web-data questions. For the image-level and instance-level questions, we randomly sampled five subsets, whose question IDs are listed in the [type]_ids_[subset].json files.
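As a minimal sketch of using the subset files, the snippet below filters a full annotation file down to one sampled subset. The exact JSON schema of the annotation files is an assumption here (a list of question dicts keyed by `question_id`); synthetic stand-in files are created so the example is self-contained.

```python
import json
import os
import tempfile

# Synthetic stand-ins for the real annotation files; the schema
# (a list of dicts with a "question_id" field) is an assumption.
questions = [{"question_id": i, "text": f"question {i}"} for i in range(10)]
subset_ids = [1, 3, 5]

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "image-level_filterd.json"), "w") as f:
    json.dump(questions, f)
with open(os.path.join(tmp, "image-level_ids_0.json"), "w") as f:
    json.dump(subset_ids, f)

# Load the full annotation file and keep only the sampled subset.
with open(os.path.join(tmp, "image-level_filterd.json")) as f:
    all_qs = json.load(f)
with open(os.path.join(tmp, "image-level_ids_0.json")) as f:
    ids = set(json.load(f))
subset = [q for q in all_qs if q["question_id"] in ids]
print(len(subset))  # 3
```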
Download the images from the Open Images validation set (v4).
Run each LVLM on R-Bench using its official inference script, and format the result file (one JSON object per line) as follows:
```
{"question_id": 0, "text": [model output]}
{"question_id": 1, "text": [model output]}
...
```
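A minimal writer for this JSON Lines result format might look like the sketch below; the model outputs and the file location are placeholders.

```python
import json
import os
import tempfile

# Placeholder model outputs, indexed by question_id.
results = [(0, "yes, there is a cat on the table"), (1, "no")]

# Write one JSON object per line (JSON Lines).
path = os.path.join(tempfile.mkdtemp(), "results.jsonl")
with open(path, "w") as f:
    for qid, text in results:
        f.write(json.dumps({"question_id": qid, "text": text}) + "\n")

# Read it back to confirm the format round-trips.
with open(path) as f:
    parsed = [json.loads(line) for line in f]
```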
Tips: we provide tools for the instance-level questions in `utils.py`. Use the `draw_mask` and `draw_box` functions to draw the mask and box on input images, respectively, and the `instance_qs_construct` function to reformat the instance questions.
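The signatures of the `utils.py` functions are not reproduced here; as a purely illustrative toy sketch of what box drawing does, the function below draws a rectangle outline on a nested-list "image" rather than a real image file.

```python
def draw_box(image, box, value=255):
    """Draw a rectangle outline on a 2D grid (toy stand-in for an image).

    box is (x1, y1, x2, y2) in pixel coordinates, inclusive. This is an
    illustration only, not the repo's actual draw_box implementation.
    """
    x1, y1, x2, y2 = box
    for x in range(x1, x2 + 1):
        image[y1][x] = value  # top edge
        image[y2][x] = value  # bottom edge
    for y in range(y1, y2 + 1):
        image[y][x1] = value  # left edge
        image[y][x2] = value  # right edge
    return image

# Draw a box on an 8x8 blank grid.
img = [[0] * 8 for _ in range(8)]
draw_box(img, (1, 1, 4, 3))
```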
Finally, run the evaluation:

```
sh eval.sh
```
The evaluation code is based on POPE.
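POPE-style evaluation reduces each answer to yes/no and reports accuracy, precision, recall, and F1. The sketch below is a hedged simplification: `eval.sh`'s exact answer parsing is not reproduced, and treating any answer that starts with "no" as negative is an assumption.

```python
def pope_metrics(preds, labels):
    """Compute accuracy/precision/recall/F1 for yes-or-no answers.

    preds: free-text model outputs; an answer starting with "no" is
    treated as negative, otherwise positive (a simplification of the
    actual POPE parsing). labels: ground-truth "yes"/"no" strings.
    """
    yes_no = ["no" if p.strip().lower().startswith("no") else "yes" for p in preds]
    tp = sum(p == "yes" and l == "yes" for p, l in zip(yes_no, labels))
    fp = sum(p == "yes" and l == "no" for p, l in zip(yes_no, labels))
    fn = sum(p == "no" and l == "yes" for p, l in zip(yes_no, labels))
    tn = sum(p == "no" and l == "no" for p, l in zip(yes_no, labels))
    acc = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, precision, recall, f1

# Toy example: one true positive, one false negative, one false positive.
metrics = pope_metrics(["Yes, it is.", "No.", "yes"], ["yes", "yes", "no"])
```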