
CARP-S: Benchmarking N Optimizers on M Benchmarks


Welcome to CARP-S! This repository contains a benchmarking framework for optimizers. It allows flexibly combining optimizers and benchmarks via a simple interface, and logging experiment results and trajectories to a database.

The main topics of this README are installation, a minimal example, the most important commands, adding a new optimizer or benchmark, and evaluation results.

For more details on CARP-S, please have a look at the documentation.

Installation

Installation from PyPI

To install CARP-S, you can simply use pip (shown here inside a fresh conda environment):

conda create -n carps python=3.11
conda activate carps
pip install carps

Additionally, you need to install the requirements for the benchmark and optimizer that you want to use. For example, if you want to use the SMAC2.0 optimizer and the BBOB benchmark, you need to install the requirements for both of them via:

pip install carps[smac,bbob]

All possible install options for benchmarks are:

dummy,bbob,hpob,mfpbench,pymoo,yahpo

All possible install options for optimizers are:

dummy,dehb,hebo,nevergrad,optuna,skopt,smac,smac14,synetune

Please note that installing all requirements for all benchmarks and optimizers in a single environment will not be possible due to conflicting dependencies.
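Because of these conflicts, a common workaround is to keep one conda environment per optimizer-benchmark pairing. A minimal sketch (the environment names are only illustrative):

# One environment per optimizer/benchmark pairing (names are illustrative)
conda create -n carps-smac-bbob python=3.11
conda activate carps-smac-bbob
pip install carps[smac,bbob]

# A second, independent environment for a different pairing
conda create -n carps-optuna-yahpo python=3.11
conda activate carps-optuna-yahpo
pip install carps[optuna,yahpo]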

Installation from Source

If you want to install from source, you can clone the repository and install CARP-S via:

git clone https://github.com/AutoML/CARP-S.git
cd CARP-S
conda create -n carps python=3.11
conda activate carps

# Install for usage
pip install .

To install the requirements for a specific optimizer and benchmark, you can then use, for example:

pip install ".[smac,bbob]"

If you want to install CARP-S for development, you can use the following command:

make install-dev

Additional Steps for Benchmarks

For HPOBench, it is necessary to install the requirements via:

bash container_recipes/benchmarks/HPOBench/install_HPOBench.sh

For some benchmarks, it is necessary to download data, such as surrogate models, in order to run the benchmark:

  • For HPOB, you can download the surrogate benchmarks with

    bash container_recipes/benchmarks/HPOB/download_data.sh
    
  • For MFPBench, you can download the surrogate benchmarks with

    bash container_recipes/benchmarks/MFPBench/download_data.sh
    
  • For YAHPO, you can download the required surrogate benchmarks and meta-data with

    bash container_recipes/benchmarks/YAHPO/prepare_yahpo.sh
    

Minimal Example

Once the requirements for both an optimizer and a benchmark, e.g. SMAC2.0 and BBOB, are installed, you can run one of the following minimal examples to benchmark SMAC2.0 on BBOB directly with Hydra:

# Run SMAC BlackBoxFacade on a certain BBOB problem
python -m carps.run +optimizer/smac20=blackbox +problem/BBOB=cfg_4_1_4_0 seed=1 task.n_trials=25

# Run SMAC BlackBoxFacade on all available BBOB problems for 10 seeds
python -m carps.run +optimizer/smac20=blackbox '+problem/BBOB=glob(*)' 'seed=range(1,11)' -m

For the second command, the Hydra -m (or --multirun) option indicates that multiple runs will be performed over a range of parameter values: in this case, over all available BBOB problems (+problem/BBOB=glob(*)) and 10 different seed values (seed=range(1,11)).
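The same Hydra sweep syntax works for any config group, so you can also sweep over optimizer configurations. The following sketch is untested but uses only standard Hydra glob/range syntax; it would run every smac20 optimizer config on every BBOB problem for 10 seeds:

# Sweep all smac20 optimizer configs over all BBOB problems and 10 seeds
python -m carps.run '+optimizer/smac20=glob(*)' '+problem/BBOB=glob(*)' 'seed=range(1,11)' -m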

Commands

You can run a certain problem and optimizer combination directly with Hydra via:

python -m carps.run +problem=... +optimizer=... seed=... -m

Another option is to fill the database with all possible combinations of problems and optimizers you would like to run:

python -m carps.container.create_cluster_configs +problem=... +optimizer=... -m

Then, run them from the database with:

python -m carps.run_from_db 

To check whether any runs are missing, you can use the following command. It will create a file runcommands_missing.sh containing the missing runs:

python -m carps.utils.check_missing <rundir>
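Assuming the generated runcommands_missing.sh simply contains one run command per line, the missing experiments can then be started by executing it:

bash runcommands_missing.sh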

To collect all run data generated by the file logger into csv files, use the following command:

python -m carps.analysis.gather_data <rundir>

The csv files are then located in <rundir>: logs.csv contains the trial info and values, and logs_cfg.csv contains the experiment configuration. The two files can be matched via the column experiment_id.
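For your own downstream analysis you can load and join the two CSV files directly. Below is a minimal pandas sketch, assuming the default file names and the experiment_id column described above:

import pandas as pd

rundir = "<rundir>"  # the directory passed to carps.analysis.gather_data

# Load the gathered trial data and the per-experiment configurations
logs = pd.read_csv(f"{rundir}/logs.csv")      # trial info and values
cfgs = pd.read_csv(f"{rundir}/logs_cfg.csv")  # experiment configuration

# Match each trial to its experiment configuration via experiment_id
df = logs.merge(cfgs, on="experiment_id", how="left")
print(df.head())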

Experiments with error status (or any other status) can be reset via:

python -m carps.utils.database.reset_experiments

Adding a new Optimizer or Benchmark

For instructions on how to add a new optimizer or benchmark, please refer to the contributing guidelines for benchmarks and optimizers.

Evaluation Results

For each scenario (blackbox, multi-fidelity, multi-objective, and multi-fidelity-multi-objective) and set (dev and test), we run selected optimizers and provide the data: links to the meta data, which contains the detailed optimization setting for each run, and to the results of each optimizer-benchmark combination.
