# neotune

Simple LLM fine-tuning with LoRA and DeepSpeed.

Three inputs: a model, your datasets, and hyperparameters.
## Installation

```bash
pip install neotune
```

With optional extras:

```bash
pip install "neotune[ray]"      # Ray distributed training
pip install "neotune[logging]"  # MLflow + Weights & Biases
pip install "neotune[all]"      # everything
```
## Quick Start

```python
from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
    hyperparameters={"learning_rate": 2e-4, "num_train_epochs": 3},
)
```
Each dataset is a HuggingFace `Dataset` with a `"text"` column containing fully formatted prompt/response strings, or pre-tokenized columns (`input_ids`, `attention_mask`, `labels`).
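If you prefer to tokenize ahead of time, the sketch below builds a dataset with those pre-tokenized columns. It is illustrative only: the toy example text and the label handling (labels copied from `input_ids`, as in plain causal-LM fine-tuning) are assumptions, not neotune internals.

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")

# Toy data with a "text" column; in practice this comes from your own corpus.
raw = Dataset.from_dict({"text": ["### Question\nWhat is 2 + 2?\n### Answer\n4"]})

def tokenize(example):
    enc = tokenizer(example["text"], truncation=True, max_length=2048)
    # For causal-LM fine-tuning, labels are a copy of the input IDs.
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_ds = raw.map(tokenize, remove_columns=raw.column_names)
# train_ds now has input_ids, attention_mask, and labels columns.
```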
## Preparing Your Datasets

neotune expects you to bring your own HuggingFace `Dataset` objects.
### From a HuggingFace dataset

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")
ds = load_dataset("tatsu-lab/alpaca", split="train")

def format_example(example):
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

ds = ds.map(format_example, remove_columns=ds.column_names)
splits = ds.train_test_split(test_size=0.1, seed=42)
# splits["train"] and splits["test"] each have a "text" column
```
### From a CSV file

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")
ds = load_dataset("csv", data_files="data.csv", split="train")

def format_example(example):
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

ds = ds.map(format_example, remove_columns=ds.column_names)
splits = ds.train_test_split(test_size=0.1, seed=42)
```
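Either recipe leaves you with splits you can pass straight to `finetune`. A minimal sketch (the `"test"` key is simply what `train_test_split` names its held-out split; here it serves as the validation set):

```python
from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": splits["train"], "validation": splits["test"]},
    hyperparameters={"learning_rate": 2e-4, "num_train_epochs": 3},
)
```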
## API Reference

### `finetune(model, datasets, hyperparameters) -> dict`

One-call convenience function. Returns test-set metrics if a `"test"` split was provided, otherwise an empty dict.

```python
from neotune import finetune

results = finetune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds, "test": test_ds},
    hyperparameters={"learning_rate": 2e-4},
)
```
### `NeoTune(model, datasets, hyperparameters)`

Class-based API.

```python
from neotune import NeoTune

nt = NeoTune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
    hyperparameters={"num_train_epochs": 5, "output_dir": "./my-adapter"},
)
results = nt.train()
```
### Parameters

- `model` (str) -- HuggingFace model ID or local path.
- `datasets` (dict[str, Dataset]) -- A dict of HuggingFace `Dataset` objects. `"train"` is required; `"validation"` and `"test"` are optional.
- `hyperparameters` (dict, optional) -- Override any default. All keys are optional:
| Key | Default | Description |
|---|---|---|
| **Training** | | |
| `learning_rate` | `1e-4` | Learning rate |
| `num_train_epochs` | `3` | Number of training epochs |
| `batch_size` | `1` | Per-device batch size |
| `gradient_accumulation_steps` | `4` | Gradient accumulation steps |
| `warmup_ratio` | `0.03` | Warmup ratio |
| `weight_decay` | `0.01` | Weight decay |
| `bf16` | `True` | bfloat16 mixed precision |
| `gradient_checkpointing` | `False` | Gradient checkpointing |
| `logging_steps` | `10` | Log every N steps |
| `eval_steps` | `50` | Evaluate every N steps |
| `save_steps` | `100` | Checkpoint every N steps |
| `save_total_limit` | `3` | Max checkpoints to keep |
| **LoRA** | | |
| `lora_r` | `16` | LoRA rank |
| `lora_alpha` | `32` | LoRA alpha |
| `lora_dropout` | `0.05` | LoRA dropout |
| `lora_target_modules` | `"all-linear"` | Target modules (auto-detects all linear layers) |
| **Output** | | |
| `output_dir` | `"./adapter-output"` | Where to save the adapter |
| `hf_repo` | `None` | Push to HuggingFace Hub |
| **DeepSpeed** | | |
| `ds_config` | `None` | `None` (auto: DeepSpeed ZeRO-2 on multi-GPU, disabled on single GPU), `"auto"` (force ZeRO-2), `False` (force off), a file path, or a dict |
| **Data** | | |
| `max_len` | `2048` | Max sequence length |
| `dataset_text_field` | `"text"` | Column name for training text |
### Methods

- `.train() -> dict` -- Fine-tune and return test metrics (if a test split was provided).
- `.tokenizer` -- Access the underlying tokenizer.
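For example, you can inspect the tokenizer before kicking off training; a brief sketch, assuming the same `train_ds`/`val_ds` as in the class-based example above:

```python
from neotune import NeoTune

nt = NeoTune(
    model="google/gemma-3-4b-pt",
    datasets={"train": train_ds, "validation": val_ds},
)
print(nt.tokenizer.eos_token)  # the underlying HuggingFace tokenizer
metrics = nt.train()           # {} unless a "test" split was provided
```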
## Device & DeepSpeed

neotune auto-detects your hardware:

| Device | Default behavior |
|---|---|
| CPU | Trains on CPU (float32) |
| Single CUDA GPU | Standard GPU training (bf16) |
| Multi CUDA GPU | DeepSpeed ZeRO-2 auto-enabled (bf16) |
| Apple Silicon (MPS) | MPS acceleration (fp16, no DeepSpeed) |
Override with `ds_config`:

```python
# Force DeepSpeed off (e.g. multi-GPU notebook without mpi4py)
finetune(model, datasets, {"ds_config": False})

# Force DeepSpeed on (even on single GPU)
finetune(model, datasets, {"ds_config": "auto"})

# Custom DeepSpeed config
finetune(model, datasets, {"ds_config": "my_ds_config.json"})
```
**Notebook users:** If you have multiple GPUs but get `ModuleNotFoundError: No module named 'mpi4py'`, either install it (`pip install mpi4py`) or disable DeepSpeed with `"ds_config": False`.
## Advanced Usage

### Generative evaluation

```python
from neotune.eval import generate_and_evaluate

results = generate_and_evaluate(
    model_id="google/gemma-3-4b-pt",
    adapter_dir="./my-adapter",
    test_ds=test_ds,
    prompt_col="instruction",
    label_col="expected_output",
)
```
### Distributed training with DeepSpeed (CLI)

```bash
deepspeed --num_gpus 4 -m neotune.train --config config.yaml --mode train
```
### Distributed training with Ray

```bash
python -m neotune.ray_train --config config.yaml --num_workers 4
```
### Kubernetes (KubeRay)

See `k8s/rayjob-lora-sft.yaml` for a KubeRay RayJob template.
## Environment Variables

| Variable | Description |
|---|---|
| `HF_TOKEN` | HuggingFace access token (for gated models) |
| `WANDB_API_KEY` | Weights & Biases API key (optional) |

Create a `.env` file in your working directory:

```
HF_TOKEN=your_token_here
WANDB_API_KEY=your_wandb_key_here
```
## License

MIT