albertotestoni/ndq_visual_objects

Repository for "Naming, Describing, and Quantifying Visual Objects in Humans and LLMs"

Paper link: https://arxiv.org/abs/2403.06935

Installation

Install the required packages (preferably inside a virtual environment) with:

pip install -r requirements.txt
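
If you go the virtual-environment route, a minimal sketch using Python's built-in venv module (the environment name .venv is just a placeholder):

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install -r requirements.txt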

To install LLaVA, please follow the instructions in the original GitHub repository here. Keep in mind that although LLaVA-v1.6 exists, this project was developed with LLaVA-v1.5; to use the newer version, change the model name in main.py.

For FROMAGe, model weights can be obtained here.

Usage

Runs are performed by executing main.py with command-line arguments. For example, to run on the NOUN dataset using BLIP-2, with 3 samples per image and a top-p of 0.7 for nucleus sampling, use the following command:

python main.py --model blip2 --dataset noun --top_p 0.7 --samples 3
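
The same pattern works for the other models and datasets; for example, assuming llava is the identifier accepted by the --model flag (the exact model and dataset identifiers are defined in main.py):

python main.py --model llava --dataset noun --top_p 0.7 --samples 3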

Results are saved in the results folder under the respective model and dataset combination.

The data can be further processed for analysis using process_data.py and plot_results.py.
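
Putting it together, a sketch of the full pipeline, assuming process_data.py and plot_results.py can be run without arguments (their flags are not documented here, so check the scripts before running):

python main.py --model blip2 --dataset noun --top_p 0.7 --samples 3
python process_data.py
python plot_results.py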

Acknowledgements

We would like to thank the authors of the LLaVA, BLIP-2, and FROMAGe papers for their outstanding work on their respective models. The codebases for their models can be found here, here, and here.
