
A framework for comprehensive evaluation of Vision Language Models on visual analogy tasks.


IQ-Bench: VLM Evaluation Framework

IQ-Bench is a modular Python package designed to streamline the evaluation of Vision Language Models (VLMs) on visual analogy tasks.

Key Functionalities

  • Unified Pipeline: The FullPipeline class serves as the single entry point to consolidate data processing, model inference, and evaluation.
  • Diverse Reasoning Strategies: Supports multiple execution methods including Direct, Descriptive, Contrastive, and Classification strategies.
  • Ensemble Capabilities: Allows for model committees that aggregate results via majority voting or confidence scores.
  • Interactive Visualization: Includes a built-in Streamlit dashboard for browsing metrics and model performance.
  • Automated Evaluation: Supports both closed-ended and open-ended result evaluation. Features a "judge" model to validate results on open-ended tasks where simple answer-key comparison is insufficient.

Installation & Requirements

System Requirements

  • OS: Linux-based environments are required. Windows users must utilize WSL2.
  • Hardware: Users should estimate the required resources (GPU count, VRAM, etc.) based on the specific VLMs they intend to deploy.

Install the package directly via PyPI:

pip install iqbench

Available Datasets

The framework currently supports four datasets for running experiments. Together they span a broad range of visual reasoning tasks, including both open-ended and closed-ended formats.

VCog-Bench

VCog-Bench is a publicly available, zero-shot abstract visual reasoning benchmark designed to evaluate Multimodal Large Language Models. It integrates established AVR datasets and is available on the Hugging Face platform. This framework supports experiments on all subsets: Raven Progressive Matrices (dataset_name = "raven"), CVR (dataset_name = "cvr"), MaRs-VQA (dataset_name = "marsvqa").
Source: VCog-Bench Dataset

Bongard Problems

The Bongard Problems dataset is a classic collection of visual reasoning puzzles introduced by Mikhail Bongard. Each problem consists of two sets of images (typically 6 on the left and 6 on the right), where all images in one set share an abstract visual rule that the other set does not. The task is to identify the rule that distinguishes the two sets (such as differences in shape, topology, symmetry, count, or spatial relations) without being explicitly told which features matter.
Availability: Not available on Hugging Face. Users must provide this dataset manually in the data_raw directory.
Data Sources:
  • BP Image Repository
  • Bongard in Wonderland (Annotations)

Supported Models

The framework supports all Vision Language Models (VLMs) compatible with the vLLM package. To use a specific model, users must define its attributes and parameters in a JSON configuration file and provide the path to this file via the MODELS_CONFIG_JSON_PATH environment variable. The following models are pre-configured within the package and can be deployed without additional manual configuration:

  • InternVL Series (OpenGVLab):
    • InternVL3-8B
    • InternVL3-14B
    • InternVL3-38B
    • InternVL3-78B
  • Qwen Series:
    • Qwen2.5-VL-3B-Instruct
    • Qwen2.5-VL-7B-Instruct
    • Qwen2.5-VL-32B-Instruct
    • Qwen2.5-VL-72B-Instruct
  • LLaVA Series:
    • llava-v1.6-mistral-7b-hf
    • llava-onevision-qwen2-72b-ov-hf
  • Judge LLMs (Evaluation Judges):
    • Mistral-7B-Instruct-v0.3
    • Phi-3.5-mini-instruct
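As a sketch of the setup described above, the snippet below writes a minimal config file and points MODELS_CONFIG_JSON_PATH at it. The model name "my-org/my-vlm" is hypothetical, and the key values are illustrative; the keys themselves mirror the sample configuration shown in the next section.

```python
import json
import os
import tempfile

# Hypothetical minimal config for one custom model; keys mirror the
# sample configuration structure in this README, values are illustrative.
config = {
    "my-org/my-vlm": {
        "model_class": "VLLM",
        "max_tokens_limit": 32000,
        "num_params_billions": 7,
        "gpu_split": False,
        "param_sets": {
            "1": {"temperature": 0.5, "max_tokens": 16384},
        },
    }
}

# Write the config to disk and expose its path via the environment
# variable that IQ-Bench reads at startup.
path = os.path.join(tempfile.gettempdir(), "models_config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)
os.environ["MODELS_CONFIG_JSON_PATH"] = path
```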

Sample JSON Configuration Setup

Below is a sample configuration file structure.

{
  "OpenGVLab/InternVL3-8B": {
    "model_class": "VLLM",
    "max_tokens_limit": 32000,
    "num_params_billions": 8,
    "gpu_split": false,
    "param_sets": {
      "1": {
        "temperature": 0.5,
        "max_tokens": 16384,
        "max_output_tokens": 2048,
        "limit_mm_per_prompt": 2,
        "cpu_local_testing": false,
        "custom_args": {
          "tensor_parallel_size": 1,
          "gpu_memory_utilization": 0.9
        }
      },
      "2": {
        "temperature": 0.5, ...
        }
      }
    }
  }
}
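Because the configuration is plain JSON, it can be sanity-checked before use: json.loads fails loudly on malformed syntax such as unbalanced braces. The fragment below (with illustrative values) parses a complete single-model config and extracts one parameter set; this is reader-side validation, not the package's internal loading logic.

```python
import json

# A complete config fragment matching the sample above (values are
# illustrative). Parsing it verifies the file is structurally valid
# before pointing MODELS_CONFIG_JSON_PATH at a hand-edited copy.
raw = """
{
  "OpenGVLab/InternVL3-8B": {
    "model_class": "VLLM",
    "max_tokens_limit": 32000,
    "num_params_billions": 8,
    "gpu_split": false,
    "param_sets": {
      "1": {"temperature": 0.5, "max_tokens": 16384, "max_output_tokens": 2048}
    }
  }
}
"""
models = json.loads(raw)

# Param sets are keyed by string numbers, matching param_set_number=1
# in the usage examples below.
params = models["OpenGVLab/InternVL3-8B"]["param_sets"]["1"]
```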

Available Strategies

Disclaimer: The strategy descriptions below are illustrative. For details on how each strategy works for a specific dataset, please refer to our paper.

1. Direct Strategy

  • Method: The model is presented with the entire problem at once.
  • Process: A single prompt containing the entire question panel is provided.
  • Goal: Solve the puzzle in one step based on all available visual information.

2. Descriptive Strategy

  • Method: Relies on image-to-text translation.
  • Process: The model describes each choice image individually. These descriptions are concatenated and combined with the task description.
  • Goal: Solve the puzzle based solely on the generated text descriptions rather than the original images.

3. Contrastive Strategy

  • Method: Focuses on relational differences.
  • Process: The model is prompted to describe differences (across rows/columns or between pairs of images) iteratively.
  • Goal: Use the identified differences and the task description to deduce the correct answer.

4. Classification Strategy

  • Method: Reframes the puzzle as a selection task.
  • Process: Multiple versions of the completed problem are generated (one for each possible answer).
  • Goal: The model evaluates each version and selects the one that best preserves the logic or assumptions of the task.

Available Ensembling Strategies

1. Majority Ensemble

  • Mechanism: Uses a voting-based system.
  • Process: For closed-ended problems, it selects the answer that appears most frequently. For Bongard Problems (BP), an LLM synthesizes a consensus from the various proposed answers.
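For closed-ended problems, the voting step can be sketched as a simple frequency count. This is a minimal illustration, not the package's internal implementation:

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among ensemble members
    (illustrative sketch of majority voting for closed-ended tasks)."""
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

majority_vote(["B", "A", "B", "C", "B"])  # three members voted "B"
```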

2. Confidence Ensemble

  • Mechanism: Prioritizes the most "certain" predictions.
  • Process: It selects the answer with the highest average confidence score. In specific categories, an LLM evaluates the validity of answers by weighing them against their associated confidence metrics.
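The selection rule can be sketched as averaging confidence per distinct answer and taking the maximum. Again, this is an illustrative sketch rather than the package's internal logic:

```python
def confidence_vote(predictions):
    """predictions: list of (answer, confidence) pairs from ensemble
    members. Returns the answer with the highest average confidence."""
    scores = {}
    for answer, conf in predictions:
        scores.setdefault(answer, []).append(conf)
    return max(scores, key=lambda a: sum(scores[a]) / len(scores[a]))

# "A" averages 0.8 confidence, "B" averages 0.775, so "A" wins.
confidence_vote([("A", 0.9), ("B", 0.6), ("A", 0.7), ("B", 0.95)])
```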

3. Reasoning Ensemble

  • Mechanism: Leverages an LLMJudge for qualitative analysis.
  • Process: The aggregator model analyzes not just the final answers, but the underlying reasoning chains provided by all ensemble members to determine the most logical solution.

4. Reasoning Ensemble with Image

  • Mechanism: Multi-modal reasoning aggregation.
  • Process: This extends the standard Reasoning Ensemble by providing the aggregator LLM with the original question image alongside the text-based reasoning candidates to improve context and visual awareness.

Basic Usage

1. Initialize the Pipeline

The FullPipeline class serves as the primary entry point for the library, integrating all available modules into a unified interface.

from iqbench import FullPipeline
pipeline = FullPipeline()

2. Prepare Data

This method handles data acquisition and ensures the output is structured for downstream modules.

pipeline.prepare_data(download=True)

3. Run a single model experiment

Executes inference using a specific dataset, strategy, and model.

pipeline.run_experiment(
    dataset_name="cvr",
    strategy_name="direct",
    model_name="Qwen/Qwen2.5-VL-3B-Instruct",
    param_set_number=1,
    prompt_number=1
)

4. Run ensemble experiment

This method combines the results of the selected member experiments into a single ensemble prediction.

pipeline = FullPipeline()
pipeline.run_ensemble(
    dataset_name="cvr",
    members_configuration=[
        ["direct", "Qwen/Qwen2.5-VL-3B-Instruct", "1"],
        ["classification", "OpenGVLab/InternVL3-8B", "1"],
    ],
    type_name="reasoning",
    vllm_model_name="Qwen/Qwen2.5-VL-3B-Instruct",
    llm_model_name="Qwen/Qwen2.5-VL-3B-Instruct"
)

5. Evaluate

Evaluates experiment results.

from iqbench.technical.configs import EvaluationConfig

# for running single experiment evaluation
eval_config = EvaluationConfig(
    dataset_name="cvr",
    version="1",
    strategy_name="direct",
    model_name="Qwen/Qwen2.5-VL-3B-Instruct",
    ensemble=False,
    type_name=None,
    evaluation_output_path="evaluation_results",
    concat=True,
    output_all_results_concat_path="all_results_concat",
    judge_model_name="mistralai/Mistral-7B-Instruct-v0.3",
    judge_param_set_number=None,
    prompt_number=1
)

pipeline.run_evaluation(eval_config)

# for running evaluation for all experiments present in the provided directory path
pipeline.run_missing_evaluations_in_directory(path="results")

6. Visualise

Launches the interactive Streamlit dashboard.

pipeline.visualise()

# visualisation process can be stopped by using the following method
pipeline.stop_visualiser()
