Hydra Optuna MLflow Sweeper

Hydra Optuna sweeper with MLflow parent-run logging

Hydra Optuna MLflow Sweeper is a general-purpose Hydra sweeper plugin for hyperparameter optimization with Optuna.

This project is based on the original Hydra Optuna Sweeper plugin by Toshihiko Yanase: https://github.com/toshihikoyanase/hydra-optuna-sweeper/tree/main

What This Package Adds

In addition to Optuna-based sweeping, this package adds:

  • MLflow study and trial hierarchy logging, including parent run propagation to trial jobs.
  • Restart behavior for persistent studies with restart_mode:
    • resume: continue an existing study in the same storage.
    • fresh: create a new timestamped study name while keeping the same storage backend.
  • Support for persistent SQLite Optuna storage (for example, sqlite:///logs/optuna/mlp_search.db).
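The fresh mode's timestamped naming can be sketched as follows. This is an illustration of the behavior described above, not the plugin's actual implementation, and the exact suffix format is an assumption:

```python
from datetime import datetime
from typing import Optional


def resolve_study_name(base_name: str, restart_mode: str,
                       now: Optional[datetime] = None) -> str:
    """Return the study name to use for this sweep.

    resume keeps the configured name so Optuna reloads the existing study
    from the same storage; fresh appends a timestamp so a brand-new study
    is created in the same storage backend.
    """
    if restart_mode == "resume":
        return base_name
    if restart_mode == "fresh":
        stamp = (now or datetime.now()).strftime("%Y%m%d_%H%M%S")
        return f"{base_name}_{stamp}"
    raise ValueError(f"unknown restart_mode: {restart_mode}")
```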

Installation

Using pip (editable install from a source checkout):

pip install -e .

Using uv:

uv sync

Quick Usage

Set the sweeper in your Hydra config:

defaults:
  - override /hydra/sweeper: mlflow_optuna

The sweeper injects these runtime overrides for each trial:

  • +mlflow_parent_run_id
  • +optuna_trial_number

Your training code can use these values to attach nested runs to the study parent run.
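For example, a trial job might fold the injected trial number into its MLflow run name. The helper below is illustrative (the function name is hypothetical; only the optuna_trial_number override comes from the sweeper):

```python
def trial_run_name(base_name, trial_number):
    """Build a per-trial MLflow run name from the injected optuna_trial_number.

    Hydra overrides arrive as strings, so coerce before formatting; fall back
    to the bare base name when the job is not running under the sweeper.
    """
    if trial_number is None:
        return base_name
    return f"{base_name}_trial_{int(trial_number)}"
```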

mlflow_study_run_name controls the top-level study run name created by the sweeper. When set, that explicit value is used instead of the resolved study name.
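That fallback amounts to a one-line rule, sketched here for clarity (the function name is illustrative, not part of the plugin's API):

```python
def resolve_study_run_name(mlflow_study_run_name, study_name):
    """Explicit config value wins; otherwise fall back to the resolved study name."""
    return mlflow_study_run_name if mlflow_study_run_name is not None else study_name
```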

Recommended Config Example

Below is a production-style sweep configuration:

# @package _global_
defaults:
  - override /hydra/sweeper: mlflow_optuna

# Metric returned by train() (unused by CV)
optimized_metric: "val/loss"

# Vary the CV split seed across trials
split_seed: ${hydra:job.num}

log_system_metrics: false
save_checkpoints: false

hydra:
  mode: "MULTIRUN"
  sweeper:
    optuna_config:
      # Persistent study DB. Re-running the same command with resume
      # continues the same study.
      storage: sqlite:///logs/optuna/mlp_search.db
      study_name: mlp_search
      load_if_exists: true

      # resume: keep same study_name
      # fresh: append timestamp suffix to create a new study in same DB
      restart_mode: resume

      # Top-level MLflow run name (defaults to study_name when null)
      mlflow_study_run_name: null

      # Set n_jobs > 1 only when your hardware can safely parallelize trials
      n_jobs: 1
      direction: minimize
      n_trials: 50

      sampler:
        _target_: optuna.samplers.TPESampler
        seed: 42

      params:
        # Architecture
        model.model.hidden_size: choice(12, 16, 20, 24, 28, 32)
        model.model.num_layers: choice(2, 3, 4, 5)
        model.model.activation: choice("relu", "softplus", "silu")
        model.model.dropout: choice(0.0, 0.1, 0.2, 0.3, 0.4, 0.5)

        # Optimization
        model.weight_decay: choice(0, 1e-5, 1e-4, 1e-3)

        # Batch size affects throughput and generalization
        datamodule.batch_size: choice(1024, 2048, 4096)

  # Keep sweep directory simple to avoid unresolved interpolation issues
  sweep:
    dir: logs/multirun/${now:%Y-%m-%d_%H-%M-%S}
    subdir: ${hydra.job.num}
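The choice(...) expressions above are parsed by Hydra's override grammar into discrete search spaces, which the sweeper hands to Optuna as categorical distributions. A rough illustration of what such parsing yields; this simplified parser is not the plugin's actual implementation and only handles the flat cases shown in the config:

```python
import ast
import re


def parse_choice(expr: str) -> list:
    """Parse a simple choice(...) sweep expression into a list of Python values.

    Covers flat numeric and string choices like those in the config above;
    real parsing is done by Hydra's override grammar, not this function.
    """
    match = re.fullmatch(r"choice\((.*)\)", expr.strip())
    if match is None:
        raise ValueError(f"not a choice expression: {expr}")
    return [ast.literal_eval(item.strip()) for item in match.group(1).split(",")]
```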

Minimal Example App

A minimal runnable example is provided in example/.

python example/quadratic.py -m 'x=interval(-5.0, 5.0)' 'y=interval(0.0, 10.0)'
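The example app returns a scalar objective that the sweeper minimizes over the two interval-swept parameters. Stripped of the Hydra decorator and config wiring, the objective presumably reduces to something like the following simple quadratic (an assumption about example/quadratic.py, shown for illustration only):

```python
def quadratic(x: float, y: float) -> float:
    """Toy objective with its minimum at the origin; the sweeper searches x and y to minimize it."""
    return x ** 2 + y ** 2
```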

Train-Side MLflow Run Setup

In trial jobs (for example train.py), consume mlflow_parent_run_id injected by the sweeper to attach each training run under the study run:

from omegaconf import DictConfig
import mlflow


def _start_mlflow_run(cfg: DictConfig) -> mlflow.ActiveRun:
    """Start an MLflow run from config values, nesting it under the sweeper's parent run."""
    logger_cfg = cfg.trainer.logger
    tracking_uri = logger_cfg.tracking_uri
    experiment_name = cfg.experiment_path
    run_name = cfg.get("run_name")
    # Injected by the sweeper as +mlflow_parent_run_id for each trial job.
    parent_run_id = cfg.get("mlflow_parent_run_id")

    mlflow.set_tracking_uri(tracking_uri)
    mlflow.set_experiment(experiment_name)

    start_run_kwargs = {"run_name": run_name}
    if parent_run_id:
        # Nest this trial's run under the study's parent run.
        start_run_kwargs["parent_run_id"] = parent_run_id
    return mlflow.start_run(**start_run_kwargs)

With restart_mode: resume, rerunning the same sweep command with the same study_name and storage backend continues the existing Optuna study.

Contributing

We welcome contributions! To get started:

  1. Set up the development environment:

    uv sync
    source .venv/bin/activate
    
  2. Install pre-commit hooks:

    uv run pre-commit install
    
  3. Make your changes and run linting/tests:

    uv run pre-commit run --all-files
    uv run pytest
    
  4. Submit a pull request with a clear description of your changes.

Please ensure your code follows the project's style guidelines (enforced by ruff and pre-commit).

License

This project is licensed under the MIT License. See LICENSE for details.
