
OpenCompass VLM Evaluation Kit for Eval-Scope

Project description

VLMEvalKit (the Python package name is vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of data preparation across multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs and provide evaluation results obtained with both exact matching and LLM-based answer extraction.
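As a quick orientation, installing this distribution and verifying the import might look like the following (note the split naming per this page: the published wheel is ms_vlmeval, while the description above states the Python import name is vlmeval):

```shell
# Install the wheel published for this project, then check the import name
# stated in the project description (vlmeval).
pip install ms_vlmeval
python -c "import vlmeval; print(vlmeval.__name__)"
```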


📊 Datasets, Models, and Evaluation Results

The performance numbers on our official multi-modal leaderboards can be downloaded from here!

OpenVLM Leaderboard: Download All DETAILED Results.

Supported Image Understanding Dataset

  • By default, all evaluation results are presented in OpenVLM Leaderboard.
  • Abbrs: MCQ: Multi-choice question; Y/N: Yes-or-No Questions; MTT: Benchmark with Multi-turn Conversations; MTI: Benchmark with Multi-Image as Inputs.
| Dataset | Dataset Names (for run.py) | Task |
|---|---|---|
| MMBench Series: MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN], MMBench_TEST_[EN/CN], MMBench_DEV_[EN/CN]_V11, MMBench_TEST_[EN/CN]_V11, CCBench | MCQ |
| MMStar | MMStar | MCQ |
| MME | MME | Y/N |
| SEEDBench Series | SEEDBench_IMG, SEEDBench2, SEEDBench2_Plus | MCQ |
| MM-Vet | MMVet | VQA |
| MMMU | MMMU_[DEV_VAL/TEST] | MCQ |
| MathVista | MathVista_MINI | VQA |
| ScienceQA_IMG | ScienceQA_[VAL/TEST] | MCQ |
| COCO Caption | COCO_VAL | Caption |
| HallusionBench | HallusionBench | Y/N |
| OCRVQA* | OCRVQA_[TESTCORE/TEST] | VQA |
| TextVQA* | TextVQA_VAL | VQA |
| ChartQA* | ChartQA_TEST | VQA |
| AI2D | AI2D_[TEST/TEST_NO_MASK] | MCQ |
| LLaVABench | LLaVABench | VQA |
| DocVQA+ | DocVQA_[VAL/TEST] | VQA |
| InfoVQA+ | InfoVQA_[VAL/TEST] | VQA |
| OCRBench | OCRBench | VQA |
| RealWorldQA | RealWorldQA | MCQ |
| POPE | POPE | Y/N |
| Core-MM- | CORE_MM (MTI) | VQA |
| MMT-Bench | MMT-Bench_[VAL/ALL], MMT-Bench_[VAL/ALL]_MI | MCQ (MTI) |
| MLLMGuard- | MLLMGuard_DS | VQA |
| AesBench+ | AesBench_[VAL/TEST] | MCQ |
| VCR-wiki+ | VCR_[EN/ZH]_[EASY/HARD]_[ALL/500/100] | VQA |
| MMLongBench-Doc+ | MMLongBench_DOC | VQA (MTI) |
| BLINK | BLINK | MCQ (MTI) |
| MathVision+ | MathVision, MathVision_MINI | VQA |
| MT-VQA | MTVQA_TEST | VQA |
| MMDU+ | MMDU | VQA (MTT, MTI) |
| Q-Bench1 | Q-Bench1_[VAL/TEST] | MCQ |
| A-Bench | A-Bench_[VAL/TEST] | MCQ |
| DUDE+ | DUDE | VQA (MTI) |
| SlideVQA+ | SLIDEVQA, SLIDEVQA_MINI | VQA (MTI) |
| TaskMeAnything ImageQA Random+ | TaskMeAnything_v1_imageqa_random | MCQ |
| MMMB and Multilingual MMBench+ | MMMB_[ar/cn/en/pt/ru/tr], MMBench_dev_[ar/cn/en/pt/ru/tr], MMMB, MTL_MMBench_DEV (MMMB & MTL_MMBench_DEV are all-in-one names covering 6 languages) | MCQ |
| A-OKVQA+ | A-OKVQA | MCQ |
| MuirBench+ | MUIRBench | MCQ |
| GMAI-MMBench+ | GMAI-MMBench_VAL | MCQ |
| TableVQABench+ | TableVQABench | VQA |
| MME-RealWorld+ | MME-RealWorld[-CN] | MCQ |
| HRBench+ | HRBench[4K/8K] | MCQ |
| MathVerse+ | MathVerse_MINI, MathVerse_MINI_Vision_Only, MathVerse_MINI_Vision_Dominant, MathVerse_MINI_Vision_Intensive, MathVerse_MINI_Text_Lite, MathVerse_MINI_Text_Dominant | VQA |
| AMBER+ | AMBER | Y/N |
| CRPE+ | CRPE_[EXIST/RELATION] | VQA |
| MMSearch* | - | - |

* We only provide a subset of the evaluation results, since some VLMs do not yield reasonable results under the zero-shot setting

+ The evaluation results are not available yet

- Only inference is supported in VLMEvalKit (this includes the TEST splits of some benchmarks, which do not include the ground-truth answers).

* VLMEvalKit is integrated in its official repository.

VLMEvalKit will use a judge LLM to extract the answer from the model output if you set the judge's API key; otherwise it falls back to exact matching mode (searching the output string for "Yes", "No", "A", "B", "C", ...). Exact matching can only be applied to Yes-or-No tasks and multiple-choice tasks.
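As an illustration of the fallback path, here is a minimal sketch of exact-matching answer extraction for the two supported task types. This is not VLMEvalKit's actual implementation, just the idea:

```python
import re

# Hedged sketch (not VLMEvalKit's real code) of exact-matching extraction.
def extract_choice(output, choices=("A", "B", "C", "D")):
    """Return the first standalone option letter found in the output, else None."""
    for c in choices:
        if re.search(rf"\b{c}\b", output):
            return c
    return None

def extract_yes_no(output):
    """Return 'Yes'/'No' only when the output is unambiguous, else None."""
    low = output.lower()
    if "yes" in low and "no" not in low:
        return "Yes"
    if "no" in low and "yes" not in low:
        return "No"
    return None
```

Outputs that match neither pattern are the cases where a judge LLM helps.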

Supported Video Understanding Dataset

| Dataset | Dataset Names (for run.py) | Task |
|---|---|---|
| MMBench-Video | MMBench-Video | VQA |
| Video-MME | Video-MME | MCQ |
| MVBench | MVBench/MVBench_MP4 | MCQ |

Supported API Models

GPT-4v (20231106, 20240409) 🎞️🚅, GPT-4o 🎞️🚅, GPT-4o-Mini 🎞️🚅, Gemini-1.0-Pro 🎞️🚅, Gemini-1.5-Pro 🎞️🚅, Step-1V 🎞️🚅, Reka-[Edge/Flash/Core] 🚅, Qwen-VL-[Plus/Max] 🎞️🚅, Claude3-[Haiku/Sonnet/Opus] 🎞️🚅, Claude3.5-Sonnet 🎞️🚅, GLM-4v 🚅, CongRong 🎞️🚅, Yi-Vision 🎞️🚅, Hunyuan-Vision 🎞️🚅, BlueLM-V 🎞️🚅

Supported PyTorch / HF Models

IDEFICS-[9B/80B/v2-8B/v3-8B]-Instruct🚅🎞️, InstructBLIP-[7B/13B], LLaVA-[v1-7B/v1.5-7B/v1.5-13B], MiniGPT-4-[v1-7B/v1-13B/v2-7B], mPLUG-Owl[2/3]🎞️, OpenFlamingo-v2🎞️, PandaGPT-13B, Qwen-VL🚅🎞️, Qwen-VL-Chat🚅🎞️, VisualGLM-6B🚅, InternLM-XComposer-[1/2]🚅, ShareGPT4V-[7B/13B]🚅, TransCore-M, LLaVA (XTuner)🚅, CogVLM-[Chat/Llama3]🚅, ShareCaptioner🚅, CogVLM-Grounding-Generalist🚅, Monkey🚅, Monkey-Chat🚅, EMU2-Chat🚅🎞️, Yi-VL-[6B/34B], MMAlaya🚅, InternLM-XComposer-2.5🚅🎞️, MiniCPM-[V1/V2/V2.5/V2.6]🚅🎞️, OmniLMM-12B, InternVL-Chat-[V1-1/V1-2/V1-5/V2]🚅🎞️, Mini-InternVL-Chat-[2B/4B]-V1-5🚅🎞️, DeepSeek-VL🎞️, LLaVA-NeXT🚅🎞️, Bunny-Llama3🚅, XVERSE-V-13B, PaliGemma-3B🚅, 360VL-70B🚅, Phi-3-Vision/Phi-3.5-Vision🚅🎞️, WeMM🚅, GLM-4v-9B🚅, Cambrian-[8B/13B/34B], LLaVA-Next-[Qwen-32B]🎞️, Chameleon-[7B/30B]🚅🎞️, Video-LLaVA-7B-[HF]🎬, VILA1.5-[3B/8B/13B/40B]🎞️, Ovis[1.5-Llama3-8B/1.5-Gemma2-9B/1.6-Gemma2-9B/1.6-Llama3.2-3B]🚅🎞️, Mantis-8B-[siglip-llama3/clip-llama3/Idefics2/Fuyu]🎞️, Llama-3-MixSenseV1_1🚅, Parrot-7B🚅, OmChat-v2.0-13B-sinlge-beta🚅, Video-ChatGPT🎬, Chat-UniVi-7B[-v1.5]🎬, LLaMA-VID-7B🎬, VideoChat2-HD🎬, PLLaVA-[7B/13B/34B]🎬, RBDash_72b🚅🎞️, xgen-mm-phi3-[interleave/dpo]-r-v1.5🚅🎞️, Qwen2-VL-[2B/7B]🚅🎞️, slime_[7b/8b/13b]🎞️, Eagle-X4-[8B/13B]🚅🎞️, Eagle-X5-[7B/13B/34B]🚅🎞️, Moondream1-1.8B🚅, Moondream2-1.8B🚅, XinYuan-VL-2B-Instruct🚅🎞️, Llama-3.2-[11B/90B]-Vision-Instruct🚅, Kosmos2🚅

🎞️: Support multiple images as inputs.

🚅: Models can be used without any additional configuration/operation.

🎬: Support Video as inputs.

Transformers Version Recommendation:

Note that some VLMs may not run under certain transformers versions; we recommend the following settings for evaluating each VLM:

  • Please use transformers==4.33.0 for: Qwen series, Monkey series, InternLM-XComposer Series, mPLUG-Owl2, OpenFlamingo v2, IDEFICS series, VisualGLM, MMAlaya, ShareCaptioner, MiniGPT-4 series, InstructBLIP series, PandaGPT, VXVERSE.
  • Please use transformers==4.36.2 for: Moondream1.
  • Please use transformers==4.37.0 for: LLaVA series, ShareGPT4V series, TransCore-M, LLaVA (XTuner), CogVLM Series, EMU2 Series, Yi-VL Series, MiniCPM-[V1/V2], OmniLMM-12B, DeepSeek-VL series, InternVL series, Cambrian Series, VILA Series, Llama-3-MixSenseV1_1, Parrot-7B, PLLaVA Series.
  • Please use transformers==4.40.0 for: IDEFICS2, Bunny-Llama3, MiniCPM-Llama3-V2.5, 360VL-70B, Phi-3-Vision, WeMM.
  • Please use transformers==4.44.0 for: Moondream2.
  • Please use transformers==latest for: LLaVA-Next series, PaliGemma-3B, Chameleon series, Video-LLaVA-7B-HF, Ovis series, Mantis series, MiniCPM-V2.6, OmChat-v2.0-13B-sinlge-beta, Idefics-3, GLM-4v-9B, VideoChat2-HD, RBDash_72b, Llama-3.2 series, Kosmos series.
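The pins above can be turned into pip requirement strings mechanically. The helper below is purely illustrative; the family keys are hypothetical labels for the bullets above, not VLMEvalKit model names:

```python
# Illustrative mapping from model family (hypothetical keys) to the
# transformers pin recommended above.
RECOMMENDED_TRANSFORMERS = {
    "qwen-series": "4.33.0",
    "moondream1": "4.36.2",
    "llava-series": "4.37.0",
    "idefics2": "4.40.0",
    "moondream2": "4.44.0",
}

def pip_pin(family):
    """Return a pip requirement like 'transformers==4.37.0'; unpinned means latest."""
    version = RECOMMENDED_TRANSFORMERS.get(family)
    return f"transformers=={version}" if version else "transformers"
```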

Torchvision Version Recommendation:

Note that some VLMs may not run under certain torchvision versions; we recommend the following settings for evaluating each VLM:

  • Please use torchvision>=0.16 for: Moondream series

```python
# Demo: instantiate a supported VLM and run generation.
from vlmeval.config import supported_VLM

model = supported_VLM['idefics_9b_instruct']()

# Forward a single image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.

# Forward multiple images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images?'])
print(ret)  # There are two apples in the provided images.
```

🏗️ QuickStart

See the QuickStart guide for setup and first-run instructions.
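For orientation, a typical one-command evaluation via run.py looks like the following sketch; the dataset and model names must match the tables above, and the exact flags are best confirmed against the QuickStart guide:

```shell
# Evaluate one model on one benchmark in a single command.
# Names come from the dataset and model tables above.
python run.py --data MMBench_DEV_EN --model idefics_9b_instruct --verbose
```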

🛠️ Development Guide

To develop custom benchmarks or VLMs, or to contribute other code to VLMEvalKit, please refer to the Development Guide.

Call for contributions

To promote contributions from the community and share the corresponding credit (in the next report update):

  • All contributions will be acknowledged in the report.
  • Contributors with 3 or more major contributions (implementing an MLLM, a benchmark, or a major feature) can join the author list of the VLMEvalKit technical report on arXiv. Eligible contributors can create an issue or DM kennyutc in the VLMEvalKit Discord channel.

Here is a contributor list we curated based on the records.

🎯 The Goal of VLMEvalKit

The codebase is designed to:

  1. Provide an easy-to-use, open-source evaluation toolkit that makes it convenient for researchers and developers to evaluate existing LVLMs, and that makes evaluation results easy to reproduce.
  2. Make it easy for VLM developers to evaluate their own models. To evaluate a VLM on multiple supported benchmarks, one only needs to implement a single generate_inner() function; all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase.
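The single-function contract above can be sketched as follows. This is a simplified, illustrative wrapper, not the exact VLMEvalKit base-class API; the interleaved image/text message format mirrors the toolkit's style, but treat the details as assumptions:

```python
# Illustrative only: the one method a VLM developer has to supply.
class MyVLM:
    def generate_inner(self, message, dataset=None):
        # `message` is an interleaved list of image/text parts, e.g.
        # [{'type': 'image', 'value': 'cat.jpg'},
        #  {'type': 'text', 'value': 'What is this?'}]
        images = [m["value"] for m in message if m["type"] == "image"]
        texts = [m["value"] for m in message if m["type"] == "text"]
        # A real model would run inference here; we return a stub answer.
        return f"[{len(images)} image(s)] " + " ".join(texts)
```

Everything else (data download, preprocessing, inference loop, metrics) is driven by the codebase around this method.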

The codebase is not designed to:

  1. Reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks. The reasons are two-fold:
    1. VLMEvalKit uses generation-based evaluation for all VLMs (optionally with LLM-based answer extraction), while some benchmarks use different approaches (e.g., SEEDBench uses PPL-based evaluation). For those benchmarks, we compare both scores in the corresponding results. We encourage developers to support other evaluation paradigms in the codebase.
    2. By default, we use the same prompt template for all VLMs when evaluating on a benchmark, while some VLMs have their own specific prompt templates (some of which may not be covered by the codebase yet). We encourage VLM developers to implement their own prompt templates in VLMEvalKit if they are not covered currently; that will help improve reproducibility.
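For concreteness, a benchmark-agnostic multiple-choice prompt template might look like the generic sketch below (not the template VLMEvalKit actually ships):

```python
# Generic sketch of a shared multiple-choice prompt template;
# VLMEvalKit's real templates may differ.
def build_mcq_prompt(question, options):
    """Render a question plus lettered options into a single prompt string."""
    lines = [question]
    lines += [f"{letter}. {text}" for letter, text in sorted(options.items())]
    lines.append("Answer with the option letter from the given choices directly.")
    return "\n".join(lines)
```

A per-model override would replace this shared rendering with the formatting the model was trained on.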

🖊️ Citation

If you find this work helpful, please consider starring 🌟 this repo. Thanks for your support!


If you use VLMEvalKit in your research or wish to refer to published open-source evaluation results, please use the following BibTeX entry together with the BibTeX entry corresponding to the specific VLM / benchmark you used.

@misc{duan2024vlmevalkit,
      title={VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models},
      author={Haodong Duan and Junming Yang and Yuxuan Qiao and Xinyu Fang and Lin Chen and Yuan Liu and Xiaoyi Dong and Yuhang Zang and Pan Zhang and Jiaqi Wang and Dahua Lin and Kai Chen},
      year={2024},
      eprint={2407.11691},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.11691},
}




Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

ms_vlmeval-0.0.10-py3-none-any.whl (318.5 kB)


File details

Details for the file ms_vlmeval-0.0.10-py3-none-any.whl.

File metadata

  • Download URL: ms_vlmeval-0.0.10-py3-none-any.whl
  • Size: 318.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for ms_vlmeval-0.0.10-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 7428f1b877bbad00bcaf8d11e92f0769657d0a07b80f5d276e6318ba91988d4a |
| MD5 | 80b13e4231f7f6cba39020853f2b30e3 |
| BLAKE2b-256 | 394949261a9f2fa102a24aa8991d66feac12f86711ba6c3960840215a4b3f775 |

