
neotune

Simple LLM fine-tuning with LoRA and DeepSpeed.

Three inputs: a model, your datasets, and hyperparameters.

Installation

pip install neotune

With optional extras:

pip install "neotune[ray]"       # Ray distributed training
pip install "neotune[logging]"   # MLflow + Weights & Biases
pip install "neotune[all]"       # everything

Quick Start

from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
    hyperparameters={"learning_rate": 2e-4, "num_train_epochs": 3},
)

Each dataset is a HuggingFace Dataset with a "text" column containing fully-formatted prompt/response strings, or pre-tokenized columns (input_ids, attention_mask, labels).
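
For the pre-tokenized path, a minimal sketch (how you format the strings is up to you; reusing the input_ids as labels is the plain causal-LM setup, not something neotune requires):

from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")

# Tokenize fully-formatted strings yourself, then hand neotune the token columns.
texts = [
    "User: What is LoRA?\nAssistant: A parameter-efficient fine-tuning method.",
]
encoded = tokenizer(texts, truncation=True, max_length=2048)

train_ds = Dataset.from_dict({
    "input_ids": encoded["input_ids"],
    "attention_mask": encoded["attention_mask"],
    "labels": encoded["input_ids"],  # plain causal-LM labels
})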

Preparing Your Datasets

neotune expects you to bring your own HuggingFace Dataset objects.

From a HuggingFace dataset

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")

ds = load_dataset("tatsu-lab/alpaca", split="train")

def format_example(example):
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

ds = ds.map(format_example, remove_columns=ds.column_names)
splits = ds.train_test_split(test_size=0.1, seed=42)

# splits["train"] and splits["test"] each have a "text" column

From a CSV file

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")

ds = load_dataset("csv", data_files="data.csv", split="train")

def format_example(example):
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

ds = ds.map(format_example, remove_columns=ds.column_names)
splits = ds.train_test_split(test_size=0.1, seed=42)
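
Either way, the resulting splits plug straight into finetune(). A minimal sketch reusing splits from above:

from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": splits["train"], "validation": splits["test"]},
    hyperparameters={"learning_rate": 2e-4, "num_train_epochs": 3},
)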

API Reference

finetune(model, datasets, hyperparameters) -> dict

One-call convenience function. Returns test-set metrics if a "test" split was provided, otherwise an empty dict.

from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds, "test": test_ds},
    hyperparameters={"learning_rate": 2e-4},
)

NeoTune(model, datasets, hyperparameters)

Class-based API.

from neotune import NeoTune

nt = NeoTune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
    hyperparameters={"num_train_epochs": 5, "output_dir": "./my-adapter"},
)

results = nt.train()

Parameters

model (str) -- HuggingFace model ID or local path.

datasets (dict[str, Dataset]) -- HuggingFace Dataset objects keyed by split name. "train" is required; "validation" and "test" are optional.

hyperparameters (dict, optional) -- Override any default; all keys are optional (a fuller example follows the table):

Key                           Default              Description

Training
learning_rate                 1e-4                 Learning rate
num_train_epochs              3                    Number of training epochs
batch_size                    1                    Per-device batch size
gradient_accumulation_steps   4                    Gradient accumulation steps
warmup_ratio                  0.03                 Warmup ratio
weight_decay                  0.01                 Weight decay
bf16                          True                 bfloat16 mixed precision
gradient_checkpointing        False                Gradient checkpointing
logging_steps                 10                   Log every N steps
eval_steps                    50                   Evaluate every N steps
save_steps                    100                  Checkpoint every N steps
save_total_limit              3                    Max checkpoints to keep

LoRA
lora_r                        16                   LoRA rank
lora_alpha                    32                   LoRA alpha
lora_dropout                  0.05                 LoRA dropout
lora_target_modules           "all-linear"         Target modules (auto-detects all linear layers)

Output
output_dir                    "./adapter-output"   Where to save the adapter
hf_repo                       None                 HuggingFace Hub repo to push the adapter to

DeepSpeed
ds_config                     None                 None (auto: ZeRO-2 on multi-GPU, off on single GPU),
                                                   "auto" (force ZeRO-2), False (force off),
                                                   a config file path, or a config dict

Data
max_len                       2048                 Max sequence length
dataset_text_field            "text"               Column name for training text
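
For example, a fuller finetune() call drawing a few keys from each group (values are illustrative; train_ds and val_ds are the datasets from the Quick Start):

from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
    hyperparameters={
        # Training
        "learning_rate": 2e-4,
        "num_train_epochs": 3,
        "gradient_checkpointing": True,
        # LoRA
        "lora_r": 8,
        "lora_alpha": 16,
        # Output
        "output_dir": "./my-adapter",
        # Data
        "max_len": 1024,
    },
)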

Methods

  • .train() -> dict -- Fine-tune and return test metrics (if test split provided).
  • .tokenizer -- Access the underlying tokenizer (see the sketch below).
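
A small sketch combining the two (train_ds as before; inspecting the tokenizer first is just one possible use):

from neotune import NeoTune

nt = NeoTune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds},
)

print(nt.tokenizer.eos_token)  # peek at the tokenizer before training
results = nt.train()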

Compute Environments

neotune auto-detects your hardware and launches training accordingly — same finetune() call everywhere:

Environment           What happens
CPU                   Trains on CPU (float32)
Single CUDA GPU       Single-GPU training (bf16)
Multi CUDA GPU        Distributed training via accelerate launch + DeepSpeed ZeRO-2
Apple Silicon (MPS)   MPS acceleration (fp16)

Multi-GPU works from scripts, notebooks, and the Python REPL — no need for torchrun or deepspeed CLI. neotune handles the distributed launch automatically.

Override with ds_config:

# Force DeepSpeed off
finetune(model, datasets, {"ds_config": False})

# Force DeepSpeed on (even on single GPU)
finetune(model, datasets, {"ds_config": "auto"})

# Custom DeepSpeed config
finetune(model, datasets, {"ds_config": "my_ds_config.json"})

Advanced Usage

Generative evaluation

from neotune.eval import generate_and_evaluate

results = generate_and_evaluate(
    model_id="google/gemma-3-4b-pt",
    adapter_dir="./my-adapter",
    test_ds=test_ds,
    prompt_col="instruction",
    label_col="expected_output",
)

Distributed training with DeepSpeed (CLI)

deepspeed --num_gpus 4 -m neotune.train --config config.yaml --mode train

Distributed training with Ray

python -m neotune.ray_train --config config.yaml --num_workers 4

Kubernetes (KubeRay)

See k8s/rayjob-lora-sft.yaml for a KubeRay RayJob template.
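
Assuming a KubeRay operator is already installed in the cluster, the template can be submitted with:

kubectl apply -f k8s/rayjob-lora-sft.yaml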

Environment Variables

Variable        Description
HF_TOKEN        HuggingFace access token (for gated models)
WANDB_API_KEY   Weights & Biases API key (optional)

Create a .env file in your working directory:

HF_TOKEN=your_token_here
WANDB_API_KEY=your_wandb_key_here

License

MIT
