
mcsm-benchs: A benchmarking toolbox for Multi-Component Signal Methods.

Project description


mcsm-benchs: A Toolbox for Benchmarking Multi-Component Signal Analysis Methods

A public, open-source, Python-based toolbox for benchmarking multi-component signal analysis methods, implemented either in Python or MATLAB/Octave.

This toolbox provides a common framework that allows researcher-independent comparisons between methods and fosters reproducible research.

Create your own collaborative benchmarks using mcsm-benchs and this GitHub template.

Collaborative benchmarks allow other researchers to add new methods to your benchmark via a pull-request. This is as easy as creating a new .py file with a Python class that wraps a call to your method (it doesn't matter if it is coded in Python, MATLAB or Octave... we welcome all!). Template files are available for this too. Let's make collaborative science easy :).
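The wrapper itself can be very small. As a purely hypothetical sketch (the actual base class, file layout, and naming conventions are defined by the template files mentioned above, so check those first), a new-method file only needs to expose a callable that takes a noisy signal and returns an estimate of the same length:

```python
import numpy as np

class NewMethod:
    """Hypothetical wrapper: adapts an existing denoiser to a
    signal-in, estimate-out callable the benchmark can invoke."""

    def __init__(self, window=5):
        self.window = window  # A parameter of the wrapped method.

    def __call__(self, noisy_signal, *args, **kwargs):
        # Here you would call your own code (Python, MATLAB or Octave).
        # As a stand-in, apply a simple moving-average smoother.
        kernel = np.ones(self.window) / self.window
        return np.convolve(noisy_signal, kernel, mode='same')

a_method = NewMethod(window=5)  # The object the benchmark would call.
```

The class name `NewMethod` and the `window` parameter are illustrative only; the template repository shows the exact interface expected by the workflows.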

The GitHub workflows provided in the template can automatically publish a summary report of the benchmarks saved in your repository, generate interactive online plots, and give access to .csv files with the data.

[!TIP] Questions or difficulties using mcsm-benchs?

Please consider opening an Issue so that we can help you and improve our software :white_check_mark:.

[!TIP] :construction: Do you want to contribute to mcsm-benchs?

Please check our contribution guidelines :white_check_mark:.

Installation using pip

pip install mcsm-benchs

For installation in development mode using poetry, check the instructions in the documentation.

Documentation


Quick-start

Creating a new benchmark

The following simple example shows how to create a new benchmark to compare your methods. Setting task='denoising' means that all methods are compared in terms of how well they reconstruct the original signal from a noisy observation.

Check out examples with other tasks and performance functions in the documentation of mcsm-benchs.
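For the denoising task, performance is typically quantified by how close the estimate is to the clean signal; a common choice is the output SNR in dB. The following is a minimal sketch of such a metric, for intuition only (not necessarily the exact performance function used internally by mcsm-benchs):

```python
import numpy as np

def reconstruction_snr(x, x_hat):
    """Output SNR in dB between a clean signal x and an estimate x_hat.
    Higher is better; a perfect reconstruction gives +inf."""
    return 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))

# Example: a small perturbation of a sinusoid yields a high output SNR.
x = np.sin(2 * np.pi * 0.05 * np.arange(256))
x_hat = x + 0.01 * np.random.randn(256)
print(f"{reconstruction_snr(x, x_hat):.1f} dB")
```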

from mcsm_benchs.Benchmark import Benchmark
from mcsm_benchs.SignalBank import SignalBank

# 1. Import (or define) the methods to be compared.
from my_methods import method_1, method_2

# 2. Create a dictionary of the methods to test.
my_methods = {'Method 1': method_1, 'Method 2': method_2}

# 3. Create a dictionary of signals.
N = 1024                            # Length of the signals.
sbank = SignalBank(N)
s1 = sbank.signal_exp_chirp()
s2 = sbank.signal_linear_chirp()
my_signals = {'Signal_1': s1, 'Signal_2': s2}

# 4. Set the benchmark parameters.
benchmark = Benchmark(task='denoising',
                      N=N,
                      repetitions=100,
                      SNRin=[0, 10, 20],    # SNR in dB.
                      methods=my_methods,
                      signals=my_signals,
                      verbosity=0,
                      )

# 5. Launch the benchmark and save it to a file.
benchmark.run()
benchmark.save_to_file('saved_benchmark')
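The my_methods module above is a placeholder. For a self-contained first run, the two imported names can be any callables that take a noisy signal (plus optional extra arguments) and return an estimate of the same length. Two toy examples, which are illustrative assumptions and not part of mcsm-benchs:

```python
import numpy as np

def method_1(noisy_signal, *args, **kwargs):
    """Toy denoiser: moving-average smoothing."""
    kernel = np.ones(7) / 7
    return np.convolve(noisy_signal, kernel, mode='same')

def method_2(noisy_signal, *args, **kwargs):
    """Toy baseline: return the input unchanged (identity)."""
    return noisy_signal
```

Both can then be passed directly in the my_methods dictionary of step 2.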

Processing and publishing benchmark results

from mcsm_benchs.Benchmark import Benchmark
from mcsm_benchs.ResultsInterpreter import ResultsInterpreter

# 1. Load a benchmark from a file.
benchmark = Benchmark.load('path/to/file/saved_benchmark')

# 2. Create the interpreter.
interpreter = ResultsInterpreter(benchmark)

# 3. Get .csv files.
interpreter.get_csv_files(path='path/to/csv/files')

# 4. Generate a report and interactive figures.
interpreter.save_report(path='path/to/report', bars=False)

# 5. Or generate interactive plots with plotly.
from plotly.offline import iplot
figs = interpreter.get_summary_plotlys(bars=True)
for fig in figs:
    iplot(fig)

If you use the GitHub template for collaborative benchmarks, your results are automatically published once you enable GitHub Pages in the repository configuration. Other researchers can then interact with your results, download .csv files with all the benchmark data, and even add their own methods to your benchmark via a pull request.

Related work

Work in progress (2024)

EUSIPCO 2023

GRETSI 2022

More

:pushpin: We use oct2py to run Octave-based methods in Python.

:pushpin: We use matlabengine to run MATLAB-based methods in Python.

:pushpin: We use plotly to create online, interactive plots.

Download files

Download the file for your platform.

Source Distribution

mcsm_benchs-0.1.3.tar.gz (47.5 kB)


Built Distribution


mcsm_benchs-0.1.3-py3-none-any.whl (50.0 kB)


File details

Details for the file mcsm_benchs-0.1.3.tar.gz.

File metadata

  • Download URL: mcsm_benchs-0.1.3.tar.gz
  • Upload date:
  • Size: 47.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.12.3 Linux/6.8.0-55-generic

File hashes

Hashes for mcsm_benchs-0.1.3.tar.gz
  • SHA256: 2995482950a9279c5e096ad2865155fc5a21eaceb67e9bc396af07fdee462046
  • MD5: 7eda8278e95d330042059f3fb9832894
  • BLAKE2b-256: 3644fee712831efc7601c5e5cd733972a20648608bca9fe5dd92d0ec01f2626b


File details

Details for the file mcsm_benchs-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: mcsm_benchs-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 50.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.12.3 Linux/6.8.0-55-generic

File hashes

Hashes for mcsm_benchs-0.1.3-py3-none-any.whl
  • SHA256: c3428391592d01f93000964ce7b94fd4787e8bea3bf46d2aea80f501316d6b81
  • MD5: 5f9342b6e7bbc4a655ee8cfe899f8880
  • BLAKE2b-256: cbf4ea267adca98c9322b408126747d8ef18df3530db8f148c0ec63371ac1128

