
Active Learning Pipelines Benchmark

Project description


ALPBench: A Benchmark for Active Learning Pipelines on Tabular Data

ALPBench is a Python package for the specification, execution, and performance monitoring of active learning pipelines (ALPs), each consisting of a learning algorithm and a query strategy, on real-world tabular classification tasks. It has built-in measures to ensure reproducible evaluations, saving the exact dataset splits and hyperparameter settings of the algorithms used. In total, ALPBench comprises 86 real-world tabular classification datasets and 5 active learning settings, yielding 430 active learning problems. The benchmark is also easy to extend: you can implement your own learning algorithm and/or query strategy and benchmark it against the existing approaches.

🛠️ Install

ALPBench is intended to work with Python 3.10 and above.

# The base package can be installed via pip:
pip install alpbench

# Alternatively, you can install the full package via pip
# (quoting protects the brackets from shell expansion, e.g. in zsh):
pip install "alpbench[full]"

# Or you can install the package from source:
git clone https://github.com/ValentinMargraf/ActiveLearningPipelines.git
cd ActiveLearningPipelines
conda create --name alpbench python=3.10
conda activate alpbench

# Install for usage (without TabNet and TabPFN)
pip install -r requirements.txt

# Install for usage (with TabNet and TabPFN)
pip install -r requirements_full.txt
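
After any of the installs above, a quick smoke test (standard library only) confirms that the package is importable and shows which version was installed:

from importlib.metadata import version

import alpbench  # a successful import is the main check
print(version("alpbench"))  # prints the installed version, e.g. 0.1.1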

Documentation at https://activelearningpipelines.readthedocs.io/en/latest/

⭐ Quickstart

You can use ALPBench in different ways. Several learners and query strategies are already built in and can be run simply by referencing them by name, as the minimal example below shows. You can also implement your own (new) query strategies in the alpbench.pipeline module; a sketch follows the example.

📈 Fit an Active Learning Pipeline

Fit an ALP on the dataset with OpenML ID 31, using a random forest and margin sampling. You can find similar example code snippets in examples/.

from sklearn.metrics import accuracy_score

from alpbench.benchmark.BenchmarkConnector import DataFileBenchmarkConnector
from alpbench.evaluation.experimenter.DefaultSetup import ensure_default_setup
from alpbench.pipeline.ALPEvaluator import ALPEvaluator

# create benchmark connector and establish database connection
benchmark_connector = DataFileBenchmarkConnector()

# load some default settings and algorithm choices
ensure_default_setup(benchmark_connector)

evaluator = ALPEvaluator(
    benchmark_connector=benchmark_connector,
    setting_name="small",
    openml_id=31,
    sampling_strategy_name="margin",
    learner_name="rf_gini",
)
alp = evaluator.fit()

# predict on the test split and evaluate the predictions
X_test, y_test = evaluator.get_test_data()
y_hat = alp.predict(X=X_test)
print("final test acc", accuracy_score(y_test, y_hat))

>> final test acc 0.7181818181818181
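
As mentioned above, you can also add your own query strategies. The interface to implement lives in the alpbench.pipeline module; its exact base class and method signature are best taken from the documentation and the code in examples/. As a rough, framework-independent sketch of the selection logic such a strategy encapsulates, here is minimal entropy sampling over an unlabeled pool (plain NumPy; learner, X_pool, and batch_size are illustrative names, not ALPBench API):

import numpy as np

def entropy_sampling(learner, X_pool, batch_size):
    """Pick the batch_size pool instances whose predicted class
    distribution is most uncertain (highest Shannon entropy)."""
    proba = learner.predict_proba(X_pool)  # shape (n_pool, n_classes)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)  # epsilon avoids log(0)
    return np.argsort(entropy)[-batch_size:]  # indices of the most uncertain instances

Once wrapped in the query-strategy interface from alpbench.pipeline, a strategy like this can be benchmarked side by side against the built-in ones.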

Changelog

v0.1.0 (2024-06-13)

Initial release

  • pipeline can be used to combine learning algorithms and query strategies into active learning pipelines
  • evaluation provides tools to evaluate active learning pipelines
  • benchmark monitors the performance of active learning pipelines over time and stores results in a database

v0.1.1 (2024-06-14)

Patch release

  • extra code for TabNet no longer needs to be included from the repository

Download files

Download the file for your platform.

Source Distribution

alpbench-0.1.1.tar.gz (82.0 kB)

Built Distribution

alpbench-0.1.1-py3-none-any.whl (93.0 kB)
