

:warning: This repository is still under active construction :construction:. It is not functional yet.


:construction: Eclipse Aidge: Benchmarking Module

Benchmarking tool for Eclipse Aidge – validate model correctness, precision, and performance across multiple libraries.

Overview

This module benchmarks ML models across Eclipse Aidge backends (CPU, CUDA, exportCPP) as well as external libraries (Torch, ONNXRuntime). It measures:

  • Correctness (output similarity)
  • Precision
  • Inference time (performance)
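At their core, the three measurements reduce to two simple operations. The sketch below illustrates the principle with plain NumPy and `time.perf_counter`; it is a minimal, library-agnostic illustration, not the module's actual internals (the helper names are hypothetical):

```python
import time
import numpy as np


def outputs_match(ref: np.ndarray, out: np.ndarray,
                  rtol: float = 1e-5, atol: float = 1e-8) -> bool:
    """Correctness/precision: element-wise similarity within tolerances."""
    return np.allclose(ref, out, rtol=rtol, atol=atol)


def time_inference(fn, nb_iterations: int = 10) -> list[float]:
    """Performance: wall-clock duration of each call, in seconds."""
    times = []
    for _ in range(nb_iterations):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times


# Example: a float32 computation against a float64 reference
x = np.linspace(0.0, 1.0, 1000)
ref = np.sin(x)
out = np.sin(x.astype(np.float32)).astype(np.float64)
print(outputs_match(ref, out))  # close, though not bit-identical
print(len(time_inference(lambda: np.sin(x))))
```

The `rtol`/`atol` pair is what distinguishes a correctness check (are the outputs the same computation?) from a precision check (how tight can the tolerances be before the comparison fails?).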


Quick start

System Requirements

  • python >= 3.10
  • numpy >= 1.22.0
  • aidge_core >= 0.7.0

Any benchmarked library might require additional dependencies, notably onnx for the external libraries.

Plotting results requires the matplotlib package.

Installation

See the main Aidge repository for general instructions on installing Aidge. Once those steps are complete, you can install the benchmarking library in two ways:

1. Using pip (recommended; Linux only)

pip install aidge-benchmark

2. Build from source

Additional system requirements

  • CMake >= 3.18

Then, from the repository root:

pip install . -v

Verify installation

python -c "import aidge_core; import aidge_benchmark; aidge_benchmark.create_benchmark_from_dict({})"

Usage

Command-line interface (CLI)

This module comes with a callable script: aidge_benchmark

aidge_benchmark --config-file conv2d.json -c -t -m aidge_export_cpp aidge_backend_cpu aidge_backend_cuda torch onnxruntime --save-directory benchmark_results
aidge_benchmark --help

More details are available in the tutorials.
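The CLI and the Python API both read a benchmark description file such as conv2d.json. The exact schema is not documented here, so the sketch below only shows the plausible shape of such a description, built as a plain dictionary and saved to JSON; every key name is a hypothetical placeholder (check the tutorials for the real schema):

```python
import json

# All keys below are assumptions, not the documented schema.
benchmark_spec = {
    "operation": "Conv2D",
    "default": {"dtype": "float32", "nb_iterations": 10},
    "cases": [
        {"label": "3x3-kernel", "input_shape": [1, 3, 224, 224], "kernel": [3, 3]},
        {"label": "5x5-kernel", "input_shape": [1, 3, 224, 224], "kernel": [5, 5]},
    ],
}

with open("conv2d.json", "w") as f:
    json.dump(benchmark_spec, f, indent=2)

# The resulting file can then be passed to the CLI (--config-file conv2d.json)
# or loaded in Python via create_benchmark_from_json("conv2d.json").
```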

Python API

Here is an example script that compares the outputs and inference times of several libraries against a reference library for the convolution operation, and saves the results as an SVG image.

import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

import aidge_benchmark as bench
from aidge_onnx import convert_aidge_to_onnx
import aidge_backend_cpu  # ensures backend is registered
import aidge_export_cpp  # ensures backend is registered

# List of backends we want to test against the reference implementation
TESTED_LIBS: list[str] = ["torch", "aidge_backend_cpu", "aidge_export_cpp"]
REFERENCE_LIB: str = "onnxruntime"
# Benchmark test cases description file
BENCHMARK_FILE = Path("conv2d.json")
N_ITER = 10


# Helper function
def format_time_stats(times: list[float]) -> str:
    """Return mean ± std formatted in ms (scientific notation)."""
    mean, std = np.mean(times), np.std(times)
    return f"{mean*1e3:.2e} ± {std*1e3:.2e} ms"


def run_single_benchmark(cfg):
    """
    Run a benchmark for a single configuration.
    Returns a dictionary mapping backend name -> list of timings (seconds).
    """
    # Create random input tensors according to the spec
    inputs = cfg.generate_inputs()

    # Build the Aidge model from 'cfg', then convert it to ONNX
    model_aidge = cfg.as_model(input_arrays=inputs)
    model = convert_aidge_to_onnx(
        model_aidge, f"test-model_{cfg.operation}", opset=21, ir_version=10
    )
    # update input names for ONNX
    for idx, named_tensor in enumerate(inputs):
        if idx < len(model.graph.input):
            for node in model.graph.node:
                for i, inp in enumerate(node.input):
                    if inp == model.graph.input[idx].name:
                        node.input[i] = named_tensor.name
            model.graph.input[idx].name = named_tensor.name


    # /!\ specific to ONNX format
    # Determine how many inputs are actual data (excluding initializers)
    nb_data_inputs = cfg.format.metadata["initializer_rank"]

    onnx_param = {"model": model, "inputs": inputs[:nb_data_inputs]}
    aidge_param = {"model": model_aidge, "inputs": inputs[:nb_data_inputs]}

    backend_params = {
        **{lib: onnx_param for lib in ["torch", "onnxruntime"]},
        **{lib: aidge_param for lib in ["aidge_backend_cpu", "aidge_export_cpp"]},
    }

    # Reference run
    ref_out = bench.compute_output(REFERENCE_LIB, **backend_params[REFERENCE_LIB])
    ref_times = bench.measure_inference_time(
        REFERENCE_LIB, **backend_params[REFERENCE_LIB], nb_iterations=N_ITER
    )
    print(f"\t{REFERENCE_LIB:<20}✔️\t{format_time_stats(ref_times)}")
    results = {REFERENCE_LIB: ref_times}

    # Test each backend and compare results
    for lib in TESTED_LIBS:
        out = bench.compute_output(lib, **backend_params[lib])
        is_equal = bench.utils.compare_tensors(ref_out, out, verbose=False)
        times = bench.measure_inference_time(
            lib, **backend_params[lib], nb_iterations=N_ITER
        )
        print(f"\t{lib:<20}{'✔️' if is_equal else '❌'}\t{format_time_stats(times)}")
        results[lib] = times
    return results


if __name__ == "__main__":
    scheme, default_cfg = bench.create_benchmark_from_json(str(BENCHMARK_FILE))
    timing_results = {}
    for label, cfg in scheme:
        print(f"\n- {label}")

        # Merge test config with default settings
        full_cfg = default_cfg.override_with(cfg)
        timing_results[label] = run_single_benchmark(full_cfg)

    fig, _ = bench.visualize.ratio_plot(timing_results, REFERENCE_LIB)
    fig.savefig(f"{str(BENCHMARK_FILE)}_performances.svg")
    print(f"\nSaved: '{str(BENCHMARK_FILE)}_performances.svg'")
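If you want the raw speed ratios rather than a plot, a `timing_results` dictionary like the one built above can be reduced with plain NumPy. This is a sketch of the kind of summary `ratio_plot` presumably visualizes, not its actual implementation:

```python
import numpy as np


def speed_ratios(timing_results: dict, reference_lib: str) -> dict:
    """For each test case, the mean time of each library divided by the
    reference library's mean time (> 1.0 means slower than the reference)."""
    ratios = {}
    for label, per_lib_times in timing_results.items():
        ref_mean = np.mean(per_lib_times[reference_lib])
        ratios[label] = {
            lib: float(np.mean(times) / ref_mean)
            for lib, times in per_lib_times.items()
        }
    return ratios


# Toy example with hand-made timings (seconds)
example = {
    "3x3-kernel": {
        "onnxruntime": [0.010, 0.010],
        "torch": [0.020, 0.020],
    }
}
print(speed_ratios(example, "onnxruntime"))
# {'3x3-kernel': {'onnxruntime': 1.0, 'torch': 2.0}}
```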

License

Aidge is licensed under the Eclipse Public License 2.0, as found in the LICENSE file.
