Significance Analysis for HPO algorithms performing on multiple benchmarks
Significance Analysis
This package analyses datasets of different HPO algorithms performing on multiple benchmarks, using a Linear Mixed-Effects Model (LMEM)-based approach.
Note: As indicated by the v0.x.x version number, Significance Analysis is early-stage code and its APIs might change in the future.
Documentation
Please have a look at our example. The dataset should be a pandas DataFrame in the following format:
| algorithm  | benchmark  | metric | optional: budget/prior/... |
|------------|------------|--------|----------------------------|
| Algorithm1 | Benchmark1 | x.xxx  | 1.0                        |
| Algorithm1 | Benchmark1 | x.xxx  | 2.0                        |
| Algorithm1 | Benchmark2 | x.xxx  | 1.0                        |
| ...        | ...        | ...    | ...                        |
| Algorithm2 | Benchmark2 | x.xxx  | 2.0                        |
Our function dataframe_validator checks for this format.
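As a quick check of the layout, you can build a toy DataFrame and pass it through the validator. A minimal sketch, assuming the column names from the table above and a metric column named value to match the model formulas below:

```python
import pandas as pd

from significance_analysis import dataframe_validator

# Toy dataset in the documented layout: one row per observed
# (algorithm, benchmark, budget) combination; "value" holds the metric.
toy = pd.DataFrame(
    {
        "algorithm": ["Algorithm1", "Algorithm1", "Algorithm2"],
        "benchmark": ["Benchmark1", "Benchmark1", "Benchmark2"],
        "value": [0.123, 0.456, 0.789],  # placeholder metric values
        "budget": [1.0, 2.0, 1.0],       # optional fidelity column
    }
)

data = dataframe_validator(toy)  # checks the format, as in the examples below
```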
Installation
Using R (>=4.0.0), install the R packages Matrix, emmeans, lmerTest and lme4, e.g. via `install.packages(c("Matrix", "emmeans", "lmerTest", "lme4"))`.
Using pip, install the package itself:

```bash
pip install significance-analysis
```
Usage for significance testing
- Generate data from HPO algorithms on benchmarks, saving the data according to our format.
- Build a model with all interesting factors.
- Do post-hoc testing.
- Plot the results as a CD diagram.
In code, the usage pattern can look like this:

```python
import pandas as pd

from significance_analysis import dataframe_validator, model, cd_diagram

# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Build the model
mod = model("value ~ algorithm + (1|benchmark) + prior", data)

# 3. Conduct the post-hoc analysis
post_hoc_results = mod.post_hoc("algorithm")

# 4. Plot the results
cd_diagram(post_hoc_results)
```
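The first argument to model is an lme4-style formula: value is the response, algorithm and prior enter as fixed effects, and (1|benchmark) adds a random intercept per benchmark. A variant (hypothetical; it assumes your dataset has a budget column as in the format above) could model the fidelity as another fixed effect:

```python
# Hypothetical variant: include the optional "budget" column as a fixed
# effect alongside algorithm (assumes such a column exists in `data`).
mod_budget = model("value ~ algorithm + budget + (1|benchmark)", data)
post_hoc_budget = mod_budget.post_hoc("algorithm")
```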
Usage for hypothesis testing
Use the GLRT implementation or our prepared sanity checks to conduct LMEM-based hypothesis testing.
In code:

```python
import pandas as pd

from significance_analysis import (
    dataframe_validator,
    glrt,
    model,
    seed_dependency_check,
    benchmark_information_check,
    fidelity_check,
)

# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Run the preconfigured sanity checks
seed_dependency_check(data)
benchmark_information_check(data)
fidelity_check(data)

# 3. Run a custom hypothesis test, comparing model_1 and model_2
model_1 = model("value ~ algorithm", data)
model_2 = model("value ~ 1", data)
glrt(model_1, model_2)
```
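Here model_2 is nested in model_1 (it drops the algorithm effect), so a significant GLRT result indicates that the algorithm factor explains performance differences. For reference, the statistic behind a standard generalized likelihood-ratio test looks as follows; this is a generic sketch, not this package's internals:

```python
from scipy.stats import chi2

def glrt_pvalue(loglik_null: float, loglik_full: float, df_diff: int) -> float:
    """Generic likelihood-ratio test: -2 times the log-likelihood difference
    is asymptotically chi-squared with df_diff degrees of freedom."""
    statistic = -2.0 * (loglik_null - loglik_full)
    return chi2.sf(statistic, df_diff)
```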
Usage for metafeature impact analysis
Analyzing the influence a metafeature has on two algorithms' performances.
In code:
```python
import pandas as pd

from significance_analysis import dataframe_validator, metafeature_analysis

# 1. Generate/import dataset
data = dataframe_validator(pd.read_parquet("datasets/priorband_data.parquet"))

# 2. Run the metafeature analysis
scores = metafeature_analysis(data, ("HB", "PB"), "prior")
```
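Here ("HB", "PB") names the two algorithms to compare and "prior" is the metafeature column of interest. The same pattern applies to other metafeature columns, for example (hypothetical, assuming a budget column exists):

```python
# Hypothetical: analyse the influence of a "budget" metafeature,
# assuming the dataset contains such a column.
scores_budget = metafeature_analysis(data, ("HB", "PB"), "budget")
```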
For more details and features, please have a look at our example.