
neuropt


An LLM reads your training curves and designs your next experiment.


Point it at a training script and let it run overnight. The LLM sees full per-epoch train/val curves, spots overfitting, and proposes what to try next, like a research assistant who never sleeps and actually reads the loss plots.
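
To make the idea concrete, here is the kind of signal those curves carry: a toy illustration, in plain Python, of reading an overfitting signature off per-epoch losses (this is just the heuristic, not neuropt internals):

# Toy curves: val loss bottoms out at epoch 3 and climbs while train loss
# keeps falling -- the classic overfitting signature.
train_losses = [0.90, 0.60, 0.45, 0.35, 0.28, 0.22]
val_losses   = [0.95, 0.70, 0.55, 0.52, 0.58, 0.66]

best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
overfitting = (val_losses[-1] > val_losses[best_epoch]
               and train_losses[-1] < train_losses[best_epoch])
print(f"best epoch: {best_epoch}, overfitting: {overfitting}")
# -> best epoch: 3, overfitting: True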

vs Optuna and random search

[Benchmark chart: neuropt vs Optuna vs random search]

Same 15-eval budget on two tasks: CNN architecture search (14 params) and XGBoost tuning (9 params, 7-class Covertype). These results use Claude Haiku 4.5 (the smallest and cheapest of Anthropic's 4.5 models); we expect even stronger results with Sonnet or Opus. Optuna's TPE was configured with n_startup_trials=3 for a fair comparison (the default of 10 would leave it purely random for most of the budget).

In a separate 200-eval run on the CNN task, neuropt again beat the others within 15 evals and kept improving, reaching 0.337 val loss by eval 200 (vs 0.454 for Optuna's best at 15). A local Qwen backend (experimental) also beat Optuna at 15 evals (0.440 vs 0.454), despite a 40% JSON parse failure rate.
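
For reference, the Optuna baseline described above corresponds to a setup like the following; a minimal sketch under assumptions (train_and_eval is a stand-in stub, and the search space here is the Quick start's, not the benchmark's exact code):

import optuna

def train_and_eval(lr, hidden_dim, activation):
    # Stand-in for a real training run: returns a fake val loss so the
    # sketch runs end to end.
    return abs(lr - 1e-2) + hidden_dim / 1e4

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    hidden_dim = trial.suggest_int("hidden_dim", 32, 512)
    activation = trial.suggest_categorical("activation", ["relu", "gelu", "silu"])
    return train_and_eval(lr, hidden_dim, activation)

# n_startup_trials=3 as in the comparison above; the default of 10 would be
# purely random for most of a 15-eval budget.
sampler = optuna.samplers.TPESampler(n_startup_trials=3)
study = optuna.create_study(direction="minimize", sampler=sampler)
study.optimize(objective, n_trials=15)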

Quick start

pip install "neuropt[llm]"
export ANTHROPIC_API_KEY="sk-ant-..."

Option 1 — define what to search over:

# train.py
search_space = {
    "lr": (1e-4, 1e-1),                    # auto-detects log-scale
    "hidden_dim": (32, 512),                # auto-detects integer
    "activation": ["relu", "gelu", "silu"], # categorical
}

def train_fn(config):
    model = build_my_model(config["hidden_dim"], config["activation"])
    # ... train, return per-epoch losses for smarter LLM decisions ...
    return {"score": val_loss, "train_losses": [...], "val_losses": [...]}
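
For concreteness, here is one way the elided loop could be filled in; a hypothetical sketch on synthetic tensors (the MLP, the fake data, and the 10-epoch full-batch loop are illustrative, not part of neuropt):

import torch
import torch.nn as nn

ACTS = {"relu": nn.ReLU, "gelu": nn.GELU, "silu": nn.SiLU}

# Synthetic stand-in data so the sketch is self-contained.
X_train, y_train = torch.randn(512, 784), torch.randint(0, 10, (512,))
X_val, y_val = torch.randn(128, 784), torch.randint(0, 10, (128,))

def train_fn(config):
    model = nn.Sequential(
        nn.Linear(784, config["hidden_dim"]),
        ACTS[config["activation"]](),
        nn.Linear(config["hidden_dim"], 10),
    )
    opt = torch.optim.Adam(model.parameters(), lr=config["lr"])
    loss_fn = nn.CrossEntropyLoss()
    train_losses, val_losses = [], []
    for _ in range(10):  # full-batch for brevity
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(X_train), y_train)
        loss.backward()
        opt.step()
        train_losses.append(loss.item())
        model.eval()
        with torch.no_grad():
            val_losses.append(loss_fn(model(X_val), y_val).item())
    # Returning full curves, not just the final score, is what lets the LLM
    # spot overfitting and reason about trends.
    return {"score": val_losses[-1], "train_losses": train_losses, "val_losses": val_losses}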

Option 2 — just give it a model, we figure out the rest:

# train.py
model = torchvision.models.resnet18(num_classes=10)  # neuropt introspects this

def train_fn(config):
    m = config["model"].to("cuda")  # deep copy with modifications applied
    # ... train ...
    return {"score": val_loss, "train_losses": [...], "val_losses": [...]}

Then run:

neuropt run train.py

Runs until Ctrl+C. Crash-safe, resumable. Works in notebooks too:

from neuropt import ArchSearch

search = ArchSearch(train_fn=train_fn, search_space=search_space, backend="claude")
search.run(max_evals=50)

Documentation

See the full documentation for more details.

Installation

pip install neuropt                  # core
pip install "neuropt[llm]"           # + Claude API (recommended)
pip install "neuropt[llm-openai]"    # + OpenAI API
pip install "neuropt[all]"           # everything

License

MIT
