Distributed hyperparameter optimization made easy
Project description
optuna-distributed
An extension to Optuna that makes distributed hyperparameter optimization easy while keeping all of the original Optuna semantics. Optuna-distributed can run locally, utilising all CPU cores by default, or scale out to many machines in a Dask cluster.
Note
Optuna-distributed is still in the early stages of development. While core Optuna functionality is supported, a few missing APIs (especially around Optuna integrations) might prevent this extension from being entirely plug-and-play for some users. Bug reports, feature requests and PRs are more than welcome.
Features
- Asynchronous optimization by default. Scales from a single machine to many machines in a cluster.
- A distributed study walks and quacks just like a regular Optuna study, making it plug-and-play.
- Compatible with all standard Optuna storages, samplers and pruners.
- No need to modify existing objective functions.
Installation
pip install optuna-distributed
Optuna-distributed requires Python 3.8 or newer.
Basic example
Optuna-distributed wraps a standard Optuna study. The resulting object behaves just like a regular study, but the optimization process is asynchronous. Depending on the Dask client setup, each trial is scheduled to run either on an available CPU core on the local machine or on a physical worker in the cluster.
Note
Running distributed optimization requires a Dask cluster with an environment closely matching the one on the client machine. For more information on cluster setup and configuration, please refer to https://docs.dask.org/en/stable/deploying.html.
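To try the distributed path without a remote cluster, one option is to spin up an in-process Dask cluster. The snippet below is a minimal sketch using dask.distributed.LocalCluster; the worker counts are arbitrary assumptions, not recommendations.

from dask.distributed import Client, LocalCluster

# Start a throwaway in-process cluster for experimentation. In production,
# point Client at your scheduler address instead, e.g.
# Client("<your.cluster.scheduler.address>").
cluster = LocalCluster(n_workers=4, threads_per_worker=1)
client = Client(cluster)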
import random
import time

import optuna
import optuna_distributed
from dask.distributed import Client


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    # Some expensive model fit happens here...
    time.sleep(random.uniform(1.0, 2.0))
    return x**2 + y


if __name__ == "__main__":
    # client = Client("<your.cluster.scheduler.address>")  # Enables distributed optimization.
    client = None  # Enables local asynchronous optimization.
    study = optuna_distributed.from_study(optuna.create_study(), client=client)
    study.optimize(objective, n_trials=10)
    print(study.best_value)
But there's more! All of the core Optuna APIs, including storages, samplers and pruners, are supported. If you'd like to know how Optuna-distributed works, check out this article on the Optuna blog.
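As a sketch of how those pieces compose, the example below backs the wrapped study with an SQLite storage, a TPE sampler and a median pruner; the storage URL, sampler and pruner choices, and the toy objective are illustrative assumptions, not requirements.

import optuna
import optuna_distributed


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    # Report intermediate values so the pruner can stop unpromising trials early.
    for step in range(10):
        trial.report((x - 2) ** 2, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna_distributed.from_study(
        optuna.create_study(
            storage="sqlite:///example.db",  # assumed local SQLite storage
            sampler=optuna.samplers.TPESampler(),
            pruner=optuna.pruners.MedianPruner(),
        ),
        client=None,  # or a dask.distributed.Client for distributed mode
    )
    study.optimize(objective, n_trials=20)
    print(study.best_params)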
What's missing?
- Support for callbacks and Optuna integration modules.
- Study APIs such as study.stop can't be called from a trial at the moment.
- Local asynchronous optimization on Windows machines. Distributed mode is still available.
- Support for optuna.terminator.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file optuna_distributed-0.7.0.tar.gz.
File metadata
- Download URL: optuna_distributed-0.7.0.tar.gz
- Upload date:
- Size: 31.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9c89af5811787eaf649bb9fb9eea6fe3325e8e5ead4dc2435e58d8d3c90888e4
MD5 | 1c6b19adb92ddc83992ef67722dd144a
BLAKE2b-256 | 8482fe8a2a4ea234d09e8fe0e819d2c8ebe5cffb8412c5655dba108f40fd964f
File details
Details for the file optuna_distributed-0.7.0-py3-none-any.whl.
File metadata
- Download URL: optuna_distributed-0.7.0-py3-none-any.whl
- Upload date:
- Size: 30.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7165b79fbd353c61424565d381d0e8cee3dab0fe9a32f3b40704da7fdafeb102
MD5 | d65163c0bf943e8d8764a00b670603dc
BLAKE2b-256 | f1241f7abf13947fe5db9e077a2054f827028262866ae76e26f79055dbe64e17