
neuropt


An LLM reads your training curves and designs your next experiment.


Point it at a training script, let it run overnight. The LLM sees full per-epoch train/val curves, spots overfitting, and proposes what to try next — like a research assistant who never sleeps and actually reads the loss plots.

vs Optuna and random search

Benchmark: neuropt vs Optuna vs Random

Same 15-eval budget on two tasks: CNN architecture search (14 params) and XGBoost tuning (9 params, 7-class Covertype). These results use Claude Haiku 4.5, the smallest and cheapest model in Anthropic's 4.5 family; we expect even stronger results with Sonnet or Opus. Optuna's TPE sampler was configured with n_startup_trials=3 for a fair comparison (the default of 10 would make it purely random for most of the budget).
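
For reference, the Optuna baseline in this comparison looks roughly like the following minimal sketch. The objective body is a toy stand-in; in the benchmark it would train the CNN/XGBoost model and return its validation loss:

import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    # Toy stand-in for the real training run, which returns val loss.
    return (lr - 0.01) ** 2

# 3 random startup trials instead of the default 10, so TPE's model
# actually kicks in within the 15-eval budget.
sampler = optuna.samplers.TPESampler(n_startup_trials=3)
study = optuna.create_study(direction="minimize", sampler=sampler)
study.optimize(objective, n_trials=15)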

In a separate 200-eval run on the CNN task, neuropt again beat the others within 15 evals and kept improving, reaching 0.337 val loss by eval 200 (vs 0.454 for Optuna's best at 15). The experimental local Qwen backend also beat Optuna at 15 evals (0.440 vs 0.454) despite a 40% JSON parse failure rate.

Quick start

pip install neuropt[llm]
export ANTHROPIC_API_KEY="sk-ant-..."

Option 1 — define what to search over:

# train.py
search_space = {
    "lr": (1e-4, 1e-1),                    # auto-detects log-scale
    "hidden_dim": (32, 512),                # auto-detects integer
    "activation": ["relu", "gelu", "silu"], # categorical
}

def train_fn(config):
    model = build_my_model(config["hidden_dim"], config["activation"])
    # ... train, return per-epoch losses for smarter LLM decisions ...
    return {"score": val_loss, "train_losses": [...], "val_losses": [...]}

Option 2 — just give it a model, we figure out the rest:

# train.py
model = torchvision.models.resnet18(num_classes=10)  # neuropt introspects this

def train_fn(config):
    m = config["model"].to("cuda")  # deep copy with modifications applied
    # ... train ...
    return {"score": val_loss, "train_losses": [...], "val_losses": [...]}

Then run:

neuropt run train.py

Runs until Ctrl+C. Crash-safe, resumable. Works in notebooks too:

from neuropt import ArchSearch

search = ArchSearch(train_fn=train_fn, search_space=search_space, backend="claude")
search.run(max_evals=50)
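
Continuing the snippet above, you can then pull the winner back out of the search object. The accessor names here are hypothetical stand-ins, not confirmed neuropt API; check the docs for the real names:

# After run() returns (or you hit Ctrl+C), inspect the results.
best = search.best_config   # hypothetical accessor
score = search.best_score   # hypothetical accessor
print(score, best)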

Documentation

See the full documentation for more details.

Installation

pip install neuropt                # core
pip install neuropt[llm]           # + Claude API (recommended)
pip install neuropt[llm-openai]    # + OpenAI API
pip install neuropt[all]           # everything

License

MIT
