
A package for detail image caption evaluation.

Project description

Benchmarking and Improving Detail Image Caption


All code and data will be released soon.

Code and data for paper:

Benchmarking and Improving Detail Image Caption. Hongyuan Dong*, Jiawen Li*, Bohong Wu, Jiacong Wang, Yuan Zhang, Haoyuan Guo (* Equal Contribution)

Our paper is now available on arXiv.

Overview

Image captioning has long been regarded as a fundamental task in visual understanding. Recently, however, little large vision-language model (LVLM) research has discussed models' image captioning performance, because the established short-caption benchmarks are outdated and their evaluation metrics are unreliable. In this work, we propose to benchmark the detail image caption task by curating high-quality evaluation datasets annotated by human experts, GPT-4V and Gemini-1.5-Pro. We also design a more reliable caption evaluation metric called CAPTURE (CAPtion evaluation by exTracting and coUpling coRE information). CAPTURE extracts visual elements, e.g., objects, attributes and relations, from captions, and then matches these elements through three stages, achieving higher consistency with expert judgements than other rule-based or model-based caption metrics. The proposed benchmark and metric provide a reliable evaluation of LVLMs' detail image captioning ability. Guided by this evaluation, we further explore unleashing LVLMs' detail caption capabilities by synthesizing high-quality data through a five-stage data construction pipeline. Our pipeline uses only a given LVLM itself and other open-source tools, without any human or GPT-4V annotation in the loop. Experiments show that the proposed data construction strategy significantly improves the quality of model-generated detail caption data for LVLMs with leading performance, and that the data quality can be further improved in a self-looping paradigm.

Detail Image Caption Benchmark

We release the DetailCaps-4870 benchmark, which contains 4,870 images with high-quality reference captions annotated by GPT-4V and Gemini-1.5-Pro. The statistics of DetailCaps-4870, compared with other image caption benchmarks of comparable size, are shown below:

| Benchmark | Data source | Annotation expert | Img num | Ref num | Avg len | Uni. 2-gram |
| --- | --- | --- | --- | --- | --- | --- |
| COCO test | COCO | Human | $5000$ | $25,010$ | $10.59$ | $61,448$ |
| Nocaps val | Openimages | Human | $4500$ | $45,000$ | $11.49$ | $116,969$ |
| DetailCaps-100 | COCO, SAM, LAION, CC, SBU | GPT-4V & Human | $100$ | $100$ | $175.96$ | $10,858$ |
| DetailCaps-4870 | COCO, SAM, LAION, CC, SBU, Coyo, Flickr | GPT-4V & Gemini-1.5-Pro | $4870$ | $9740$ | $122.06$ | $377,184$ |

The evaluation dataset will soon be available on Huggingface. Please download the dataset and put it under the datasets folder.
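
Once the dataset is published, one minimal way to fetch it is via huggingface_hub. The repo id below is a placeholder, since the official id has not been announced yet; substitute it once the dataset is live.

```python
# Hedged sketch: download the DetailCaps-4870 benchmark into datasets/.
# The repo id is a PLACEHOLDER -- replace it with the official dataset id.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<detailcaps-4870-dataset-id>",  # placeholder, not a real repo
    repo_type="dataset",
    local_dir="datasets/DetailCaps-4870",
)
```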

Detail Image Caption Evaluation Metric: CAPTURE

The proposed metric CAPTURE (CAPtion evaluation by exTracting and coUpling coRE information) achieves the highest consistency with expert judgements on the DetailCaps benchmarks. We show the average consistency scores on the DetailCaps-100 and DetailCaps-4870 benchmarks in the table below; a sketch of how such consistency statistics can be computed follows the table.

| Caption metric | PCC $\rho$ $\uparrow$ | $1-R^2$ $\downarrow$ | Kendall's $\tau$ $\uparrow$ | Sample $\tau$ $\uparrow$ |
| --- | --- | --- | --- | --- |
| BLEU | $0.2625$ | $60.57$ | $0.1879$ | $0.2488$ |
| ROUGE-L | $0.2923$ | $138.06$ | $0.2127$ | $0.3312$ |
| CIDEr | $0.1024$ | $1.99\times10^7$ | $0.1034$ | $0.0756$ |
| METEOR | $0.4015$ | $289.02$ | $0.2922$ | $0.4075$ |
| SPICE | $0.4368$ | $128.85$ | $0.3230$ | $0.4687$ |
| CLIPScore | $0.3498$ | $32.46$ | $0.2423$ | $0.3519$ |
| CAPTURE | $0.5051$ | $8.20$ | $0.3822$ | $0.5927$ |
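
To reproduce consistency numbers like these for another metric, the statistics can be sketched with scipy and scikit-learn as below. This is our reading, not the paper's official implementation: the score arrays are illustrative placeholders, and we assume $1-R^2$ treats the raw metric scores as predictions of the expert scores, which would explain why unbounded metrics such as CIDEr blow this column up.

```python
# Hedged sketch of the consistency statistics between a caption metric and
# expert judgements. Scores below are illustrative placeholders.
import numpy as np
from scipy import stats
from sklearn.metrics import r2_score

expert = np.array([0.92, 0.35, 0.57, 0.78, 0.64])  # expert quality judgements
metric = np.array([0.88, 0.41, 0.50, 0.80, 0.59])  # metric scores, same captions

pcc, _ = stats.pearsonr(metric, expert)    # PCC (rho), higher is better
tau, _ = stats.kendalltau(metric, expert)  # Kendall's tau, higher is better

# 1 - R^2, with raw metric scores taken as predictions of expert scores
# (an assumption on our part); unscaled metrics can push this far above 1.
one_minus_r2 = 1.0 - r2_score(expert, metric)

print(f"PCC={pcc:.4f}  Kendall tau={tau:.4f}  1-R^2={one_minus_r2:.4f}")
```

Sample $\tau$, presumably Kendall's $\tau$ computed per sample across models and then averaged, requires per-sample scores from several models and is omitted from this sketch.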

Improving Detail Image Caption

We build a data construction pipeline to unleash LVLMs' detail image captioning ability with open-source vision and language tools. The performance of the proposed pipeline with different LVLM backbones is shown below, where self denotes the backbone's own captions and syn the captions synthesized by our pipeline.

| Caption | DetailCaps-100 | DetailCaps-4870 | Average |
| --- | --- | --- | --- |
| LLaVA-1.5-7B self | $51.23$ | $51.27$ | $51.25$ |
| LLaVA-1.5-7B syn | $57.11$ | $56.18$ | $56.64$ |
| LLaVA-1.5-13B self | $51.76$ | $51.45$ | $51.61$ |
| LLaVA-1.5-13B syn | $57.36$ | $56.83$ | $57.09$ |
| LLaVA-NEXT-7B self | $61.48$ | $59.86$ | $60.67$ |
| LLaVA-NEXT-7B syn | $62.24$ | $60.10$ | $61.17$ |
| Mini-Gemini-7B-HD self | $59.51$ | $57.68$ | $58.59$ |
| Mini-Gemini-7B-HD syn | $60.44$ | $58.64$ | $59.54$ |

Quick Start

Environment

Run the following commands to prepare the environment for CAPTURE and the data construction pipeline.

```bash
conda create -n detailcaption python=3.9
conda activate detailcaption
bash prepare.sh
```

Detail Image Caption Evaluation

To evaluate the performance of an LVLM on DetailCaps-4870, run the following script.

```bash
bash evaluate.sh <model_prediction>
```

`<model_prediction>` is the path to the model-generated caption file. Please organize your results in the following format:

```
{
    "id": "0001",
    "caption": "A man is walking on the street."
},
......
```
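
For reference, here is a minimal sketch of dumping model outputs into that format. The file name and the `captions` dict are placeholders, and we assume evaluate.sh accepts a JSON array; if it expects one JSON object per line instead, write the records line by line.

```python
# Hedged sketch: serialize model captions for evaluate.sh.
import json

# Placeholder for your model's outputs, keyed by image id.
captions = {
    "0001": "A man is walking on the street.",
    "0002": "Two dogs play fetch on a sandy beach.",
}

records = [{"id": image_id, "caption": text} for image_id, text in captions.items()]
with open("predictions.json", "w") as f:
    json.dump(records, f, indent=4, ensure_ascii=False)
```

Then run `bash evaluate.sh predictions.json`.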

Detail Image Caption Construction

For detail image caption construction, first download SAM, OWLv2, LLaVA-v1.5 (or another LVLM) and LLaMA-2, and place them under the `ckpt` folder:

```
ckpt
├─sam
|  ├─sam_vit_h_4b8939.pth
|  └─sam_vit_l_0b3195.pth
├─owlv2-large-patch14-ensemble
├─llava-v1.5-7b
├─llava-v1.5-13b
├─Llama-2-7b-chat-hf
└─Llama-2-13b-chat-hf
```
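
Most of these checkpoints are hosted on Huggingface and can be fetched with huggingface_hub. The repo ids below are our best guesses at the public releases, not pinned by this repository (the LLaMA-2 repos are gated and require accepting Meta's license); the SAM .pth files are distributed separately by Meta and are not covered here.

```python
# Hedged sketch: pull the Huggingface-hosted checkpoints into ckpt/.
# Repo ids are assumptions, not pinned by this repository.
from huggingface_hub import snapshot_download

checkpoints = [
    ("google/owlv2-large-patch14-ensemble", "ckpt/owlv2-large-patch14-ensemble"),
    ("liuhaotian/llava-v1.5-7b", "ckpt/llava-v1.5-7b"),
    ("liuhaotian/llava-v1.5-13b", "ckpt/llava-v1.5-13b"),
    ("meta-llama/Llama-2-7b-chat-hf", "ckpt/Llama-2-7b-chat-hf"),    # gated repo
    ("meta-llama/Llama-2-13b-chat-hf", "ckpt/Llama-2-13b-chat-hf"),  # gated repo
]
for repo_id, local_dir in checkpoints:
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```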

Then organize your image data in .parquet format, with binary images stored in the `frame` field. Run the following script to generate annotations for the parquet data files stored in `<source_path>`. `<model_size>` should be set to either 7b or 13b, selecting the pipeline for the corresponding model size.

```bash
bash generate_all_annotations.sh <model_size> <source_path>
```
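
For reference, a minimal sketch of packing raw images into the expected .parquet layout follows. The `frame` field holding the encoded image bytes is what the pipeline reads; the `id` column and file paths are illustrative assumptions.

```python
# Hedged sketch: build a parquet shard with binary images in the `frame` field.
# Paths and the extra `id` column are illustrative assumptions.
from pathlib import Path
import pandas as pd  # requires pyarrow for to_parquet

rows = []
for path in sorted(Path("raw_images").glob("*.jpg")):
    rows.append({"id": path.stem, "frame": path.read_bytes()})

Path("source_data").mkdir(exist_ok=True)
pd.DataFrame(rows).to_parquet("source_data/images_000.parquet", index=False)
```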

Citation

```bibtex
@article{dong2024benchmarking,
  title={Benchmarking and Improving Detail Image Caption},
  author={Dong, Hongyuan and Li, Jiawen and Wu, Bohong and Wang, Jiacong and Zhang, Yuan and Guo, Haoyuan},
  journal={arXiv preprint arXiv:2405.19092},
  year={2024}
}
```
