
Significance Analysis for HPO algorithms performing on multiple benchmarks


Significance Analysis


This package analyses datasets of different HPO algorithms run on multiple benchmarks, using a linear mixed-effects model (LMEM)-based approach.

Note

As indicated by the v0.x.x version number, Significance Analysis is early-stage code and its APIs might change in the future.

Documentation

Please have a look at our example. The dataset should be a pandas DataFrame in the following format:

algorithm    benchmark    metric    optional: budget/prior/...
Algorithm1   Benchmark1   x.xxx     1.0
Algorithm1   Benchmark1   x.xxx     2.0
Algorithm1   Benchmark2   x.xxx     1.0
...          ...          ...       ...
Algorithm2   Benchmark2   x.xxx     2.0

Our function dataframe_validator checks for this format.
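
As a minimal sketch, such a DataFrame can be built and validated by hand (the metric column is assumed to be named value, matching the model formulas below; all names and numbers are illustrative):

import pandas as pd
from significance_analysis import dataframe_validator

# Illustrative dataset in the expected long format: one row per
# (algorithm, benchmark, budget) observation
raw = pd.DataFrame(
    {
        "algorithm": ["Algorithm1", "Algorithm1", "Algorithm2", "Algorithm2"],
        "benchmark": ["Benchmark1", "Benchmark2", "Benchmark1", "Benchmark2"],
        "value": [0.42, 0.37, 0.45, 0.33],
        "budget": [1.0, 1.0, 2.0, 2.0],
    }
)

# dataframe_validator checks the format and returns the validated frame
data = dataframe_validator(raw)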

Installation

Using R (>= 4.0.0), install the packages Matrix, emmeans, lmerTest and lme4, e.g. via install.packages(c("Matrix", "emmeans", "lmerTest", "lme4")).

Using pip

pip install significance-analysis

Usage for significance testing

  1. Generate data from HPO algorithms on benchmarks, saving the data according to our format.
  2. Build a model with all factors of interest.
  3. Do post-hoc testing.
  4. Plot the results as a CD diagram.

In code, the usage pattern can look like this:

import pandas as pd
from significance_analysis import dataframe_validator, model, cd_diagram


# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Build the model
mod = model("value ~ algorithm + (1|benchmark) + prior", data)

# 3. Conduct the post-hoc analysis
post_hoc_results = mod.post_hoc("algorithm")

# 4. Plot the results
cd_diagram(post_hoc_results)
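
The model formula follows lme4-style syntax: fixed effects are listed by name, and (1|benchmark) adds a random intercept per benchmark. As a sketch under that assumption, a model that additionally treats the budget column from the format above as a fixed effect could look like this (a hypothetical formula, not taken from the package's docs):

# Hypothetical variant: also model the budget as a fixed effect
mod_budget = model("value ~ algorithm + prior + budget + (1|benchmark)", data)
budget_post_hoc = mod_budget.post_hoc("algorithm")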

Usage for hypothesis testing

Use the GLRT implementation or our prepared sanity checks to conduct LMEM-based hypothesis testing.

In code:

import pandas as pd
from significance_analysis import (
    dataframe_validator,
    glrt,
    model,
    seed_dependency_check,
    benchmark_information_check,
    fidelity_check,
)

# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Run the preconfigured sanity checks
seed_dependency_check(data)
benchmark_information_check(data)
fidelity_check(data)

# 3. Run a custom hypothesis test, comparing model_1 and model_2
model_1 = model("value ~ algorithm", data)
model_2 = model("value ~ 1", data)
glrt(model_1, model_2)
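
Conceptually, the GLRT compares two nested models via their maximized log-likelihoods. A minimal standalone sketch of the statistic (not the package's API; it assumes the log-likelihoods and the difference in parameter counts are known):

from scipy.stats import chi2

def glrt_statistic(loglik_null: float, loglik_full: float, df_diff: int):
    # Twice the log-likelihood gain of the fuller model, compared
    # against a chi-squared distribution with df_diff degrees of freedom
    statistic = 2 * (loglik_full - loglik_null)
    p_value = chi2.sf(statistic, df_diff)
    return statistic, p_value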

Usage for metafeature impact analysis

Analyze the influence a metafeature has on the performance of two algorithms.

In code:

import pandas as pd
from significance_analysis import dataframe_validator, metafeature_analysis

# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Run the metafeature analysis
scores = metafeature_analysis(data, ("HB", "PB"), "prior")
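
Here "HB" and "PB" are assumed to be algorithm names from the data's algorithm column and "prior" the metafeature column. To sweep every algorithm pair, one could loop over combinations, using only the signature shown above:

from itertools import combinations

# Run the analysis for each pair of algorithms in the dataset
for algo_pair in combinations(data["algorithm"].unique(), 2):
    scores = metafeature_analysis(data, algo_pair, "prior")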

For more details and features, please have a look at our example.

