Benchmarking tool for Eclipse Aidge - validate model correctness, precision, and performance across multiple libraries.

🚧 Aidge Benchmark

[!WARNING] This repository is still under active development.

This module benchmarks ML models across Eclipse Aidge modules (CPU, CUDA, exportCPP), as well as external libraries (Torch, ONNXRuntime). It measures:

  • Correctness (output similarity)
  • Precision
  • Inference time (performance)

Quick start

System Requirements

  • python >= 3.10
  • numpy >= 1.22.0
  • aidge_core >= 0.7.0

Benchmarked libraries may require additional dependencies of their own, notably onnx for the external libraries.

Plotting results requires the matplotlib package.
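
For example, to benchmark against the external libraries mentioned above and to enable plotting, one possible extra install is (assuming the standard PyPI package names):

pip install onnx onnxruntime torch matplotlib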

pip install aidge-benchmark
python -c "import aidge_core; import aidge_benchmark; aidge_benchmark.create_benchmark_from_dict({})"
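
If the installation succeeded, the second command should run without errors: it imports the package and builds an empty benchmark description with create_benchmark_from_dict.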

🛠 Build from Source

Prerequisites (in addition to the requirements above):

1. Python installation using setup scripts

| Environment | Python Development                      |
| ----------- | --------------------------------------- |
| Windows     | .\setup.ps1 -Modules backend_cpu -Tests |
| Unix        | ./setup.sh -m backend_cpu --tests       |

[!TIP] Use Get-Help setup.ps1 (Windows) or ./setup.sh -h (Unix) for full documentation.

2. Python installation using pip

Run these commands from the aidge_benchmark/ directory:

# Standard install
pip install . -v

# Install with testing dependencies
pip install .[test] -v && pytest

Editable Install (Experimental)

Use this for real-time development without re-installing.

pip install --no-build-isolation -ve . --config-settings=editable.rebuild=true -Cbuild-dir=build
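
The editable.rebuild=true setting is a scikit-build-core option that rebuilds the project when the package is imported, so source changes are picked up without re-running pip install.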

Usage

Command-line interface (CLI)

This module ships with a command-line script: aidge_benchmark

aidge_benchmark --config-file conv2d.json -c -t -m aidge_export_cpp aidge_backend_cpu aidge_backend_cuda torch onnxruntime --save-directory benchmark_results
aidge_benchmark --help

More details are available in the tutorials.
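
The JSON file passed to --config-file is the same test-case description consumed by the Python API. As a minimal sketch of loading it programmatically (this mirrors the full example in the next section and assumes conv2d.json is present):

import aidge_benchmark as bench

# Each entry of 'scheme' pairs a human-readable label with one test configuration;
# 'default_cfg' holds the settings shared by all test cases.
scheme, default_cfg = bench.create_benchmark_from_json("conv2d.json")
for label, cfg in scheme:
    print(label)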

Python API

Here is an example script that compares the outputs and inference times of several libraries against a reference library for the convolution operation, and saves the results as an SVG image.

import numpy as np
from pathlib import Path

import aidge_benchmark as bench
from aidge_onnx import convert_aidge_to_onnx
import aidge_backend_cpu  # ensures backend is registered
import aidge_export_cpp  # ensures backend is registered

# List of backends we want to test against the reference implementation
TESTED_LIBS: list[str] = ["torch", "aidge_backend_cpu", "aidge_export_cpp"]
REFERENCE_LIB: str = "onnxruntime"
# Benchmark test cases description file
BENCHMARK_FILE = Path("conv2d.json")
N_ITER = 10


# Helper function
def format_time_stats(times: list[float]) -> str:
    """Return mean ± std formatted in ms (scientific notation)."""
    mean, std = np.mean(times), np.std(times)
    return f"{mean*1e3:.2e} ± {std*1e3:.2e} ms"


def run_single_benchmark(label: str, cfg):
    """
    Run a benchmark for a single labeled configuration.
    Returns a dictionary mapping backend name -> list of timings (seconds).
    """
    # Create random input tensors according to the spec
    inputs = cfg.generate_inputs()

    # Build ONNX and Aidge model description from 'cfg' described with ONNX format
    model_aidge = cfg.as_model(input_arrays=inputs)
    model = convert_aidge_to_onnx(
        model_aidge, f"test-model_{cfg.operation}_{label}", opset=21, ir_version=10
    )
    # update input names for ONNX
    for idx, named_tensor in enumerate(inputs):
        if idx < len(model.graph.input):
            for node in model.graph.node:
                for i, inp in enumerate(node.input):
                    if inp == model.graph.input[idx].name:
                        node.input[i] = named_tensor.name
            model.graph.input[idx].name = named_tensor.name

    # /!\ specific to ONNX format
    # Determine how many inputs are actual data (excluding initializers)
    nb_data_inputs = cfg.format.metadata["initializer_rank"]

    onnx_param = {"model": model, "inputs": inputs[:nb_data_inputs]}
    aidge_param = {"model": model_aidge, "inputs": inputs[:nb_data_inputs]}

    backend_params = {
        **{lib: onnx_param for lib in ["torch", "onnxruntime"]},
        **{lib: aidge_param for lib in ["aidge_backend_cpu", "aidge_export_cpp"]},
    }

    # Reference run
    ref_out = bench.compute_output(REFERENCE_LIB, **backend_params[REFERENCE_LIB])
    ref_times = bench.measure_inference_time(
        REFERENCE_LIB, **backend_params[REFERENCE_LIB], nb_iterations=N_ITER
    )
    print(f"\t{REFERENCE_LIB:<20}✔️\t{format_time_stats(ref_times)}")
    results = {REFERENCE_LIB: ref_times}

    # Test each backend and compare results
    for lib in TESTED_LIBS:
        out = bench.compute_output(lib, **backend_params[lib])
        is_equal = bench.utils.compare_tensors(ref_out, out, verbose=False)
        times = bench.measure_inference_time(
            lib, **backend_params[lib], nb_iterations=N_ITER
        )
        print(f"\t{lib:<20}{'✔️' if is_equal else '❌'}\t{format_time_stats(times)}")
        results[lib] = times
    return results


if __name__ == "__main__":
    scheme, default_cfg = bench.create_benchmark_from_json(str(BENCHMARK_FILE))
    timing_results = {}
    for label, cfg in scheme:
        print(f"\n- {label}")

        # Merge test config with default settings
        full_cfg = default_cfg.override_with(cfg)
        timing_results[label] = run_single_benchmark(label, full_cfg)

    fig, _ = bench.visualize.ratio_plot(timing_results, REFERENCE_LIB)
    out_svg = f"{BENCHMARK_FILE}_performances.svg"
    fig.savefig(out_svg)
    print(f"\nSaved: '{out_svg}'")
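
For each test case, the script prints one line per library with a ✔️/❌ correctness mark and its mean ± std inference time, then saves the timing-ratio plot to conv2d.json_performances.svg.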

License

Aidge is licensed under the Eclipse Public License 2.0, as found in the LICENSE file.
