traincv
No-code computer vision training — from dataset to deployable model in one command.
traincv lets you train production-grade computer vision models (detection, classification, segmentation) with a single command or a 3-line Python API. It wraps Ultralytics YOLO and torchvision behind a unified interface, auto-converts between common dataset formats (COCO, YOLO, Pascal VOC), offers zero-shot auto-labeling via SAM2 / YOLO-World / CLIP, and exports trained models to ONNX, TorchScript, or TFLite ready for deployment.
Built by Viet-Anh Nguyen at NRL.ai.
Why traincv?
- One-liner API — `traincv.train("dataset/", task="detect")` is a complete pipeline
- Plugin architecture — Register custom backbones, losses, and exporters
- Local-first — Trains on your GPU, no cloud credits or upload required
- Minimal core deps — Training stacks (YOLO / torchvision) are optional extras
- Production-ready — Auto-splitting, logging, checkpointing, resumable runs
Installation
```bash
pip install traincv
```
For training backends:
```bash
pip install traincv[yolo]       # Ultralytics YOLOv8/v11 for detect + segment
pip install traincv[torch]      # torchvision for classification + DeepLabV3
pip install traincv[autolabel]  # SAM2 + YOLO-World + CLIP for auto-annotation
pip install traincv[export]     # ONNX + TorchScript + TFLite exporters
pip install traincv[all]        # everything
```
Python 3.8+ supported (tested on 3.8, 3.9, 3.10, 3.11, 3.12, 3.13)
Quick Start
```python
import traincv

# 1. Train an object detector (auto-detects YOLO-format dataset and uses YOLOv8n)
run = traincv.train(
    "datasets/pets/",  # expects images/ + labels/ or data.yaml
    task="detect",
    model="yolov8n",
    epochs=50,
    imgsz=640,
)
print(run.best_map50)    # best mAP50 on the validation set
print(run.weights_path)  # path to the best checkpoint

# 2. Export to ONNX for deployment (compatible with anycv / anydeploy)
traincv.export(run.weights_path, format="onnx", out="pets.onnx")

# 3. Use the trained model immediately
preds = traincv.predict("pets.onnx", "new_image.jpg")
```
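`best_map50` is mean average precision at an IoU threshold of 0.5. The IoU underneath that metric is a plain box-overlap ratio; a dependency-free sketch (not traincv's internals):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 squares offset by (1, 1): intersection 1, union 7, IoU 1/7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

A predicted box counts toward mAP50 when its IoU with a ground-truth box of the same class is at least 0.5.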
Models & Methods
Training backends
| Task | Backend | Models | Notes |
|---|---|---|---|
| Object Detection | Ultralytics | yolov8n/s/m/l/x, yolov11n/s/m/l/x | COCO-pretrained, transfer learning by default |
| Instance Segmentation | Ultralytics | yolov8n-seg, yolov11n-seg | Mask + box outputs |
| Classification | torchvision | mobilenetv2, mobilenetv3, resnet18/50, efficientnet_b0 | ImageNet-pretrained |
| Semantic Segmentation | torchvision | deeplabv3_resnet50, deeplabv3_mobilenetv3 | Pascal VOC-pretrained |
Dataset formats (auto-detected)
- YOLO — `images/` + `labels/` (one `.txt` per image) + `data.yaml`
- COCO — `annotations.json` with `images` / `annotations` / `categories`
- Pascal VOC — `JPEGImages/` + `Annotations/*.xml`
- ImageFolder — one sub-folder per class (classification only)
traincv converts between formats on the fly:
```python
traincv.convert("coco_dataset/", to="yolo", out="yolo_dataset/")
```
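At the level of individual boxes, COCO-to-YOLO conversion is a coordinate transform: COCO stores absolute `[x_min, y_min, w, h]` in pixels, while YOLO stores center coordinates and sizes normalized by image dimensions. A minimal sketch of that transform (traincv's converter also handles categories and file layout):

```python
def coco_box_to_yolo(box, img_w, img_h):
    """COCO [x_min, y_min, w, h] in pixels -> YOLO [cx, cy, w, h], normalized."""
    x, y, w, h = box
    return [
        (x + w / 2) / img_w,  # box center x, as a fraction of image width
        (y + h / 2) / img_h,  # box center y, as a fraction of image height
        w / img_w,            # box width, normalized
        h / img_h,            # box height, normalized
    ]

# A 100x50 box at (50, 100) in a 640x480 image.
print(coco_box_to_yolo([50, 100, 100, 50], 640, 480))
```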
Auto-labeling (zero-shot annotation)
Install `traincv[autolabel]` to bootstrap labels without manual work:
| Method | Model | Best for |
|---|---|---|
| SAM2 | Meta Segment Anything 2 | Instance masks from point/box prompts |
| YOLO-World | Open-vocabulary YOLO | Detection from text prompts ("a red car") |
| CLIP | OpenAI CLIP | Zero-shot image classification |
```python
# Generate YOLO-format bounding boxes using only text prompts
traincv.autolabel(
    "unlabeled_images/",
    method="yolo-world",
    classes=["person", "dog", "bicycle"],
    out="auto_labels/",
)
```
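The CLIP method assigns each image the class whose text embedding is most similar to the image embedding (cosine similarity, optionally softmaxed into scores). A dependency-free sketch of that scoring step, using toy vectors in place of real CLIP embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(image_emb, text_embs):
    """Return (best_class, softmax scores) over per-class text embeddings."""
    sims = {cls: cosine(image_emb, emb) for cls, emb in text_embs.items()}
    exp = {cls: math.exp(s) for cls, s in sims.items()}
    total = sum(exp.values())
    scores = {cls: e / total for cls, e in exp.items()}
    return max(scores, key=scores.get), scores

# Toy 3-d "embeddings"; real CLIP vectors are 512-d or larger.
text_embs = {"dog": [1.0, 0.0, 0.0], "cat": [0.0, 1.0, 0.0]}
best, scores = zero_shot_classify([0.9, 0.1, 0.0], text_embs)
print(best)
```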
Export formats
| Format | Via | Use case |
|---|---|---|
| ONNX | torch.onnx.export / Ultralytics exporter | Cross-platform inference (anycv, anydeploy) |
| TorchScript | torch.jit.trace | Python-free PyTorch inference |
| TFLite | TensorFlow converter | Mobile / embedded |
API Reference
| Function | Purpose |
|---|---|
| `traincv.train(dataset, task, model, epochs, ...)` | Train a model; returns a `TrainRun` |
| `traincv.predict(weights, image)` | Run inference with a trained model |
| `traincv.evaluate(weights, dataset)` | Compute metrics on a held-out split |
| `traincv.export(weights, format, out)` | Export to ONNX / TorchScript / TFLite |
| `traincv.convert(src, to, out)` | Convert between dataset formats |
| `traincv.autolabel(images, method, classes)` | Zero-shot annotation |
| `traincv.split(dataset, ratios=(0.8, 0.1, 0.1))` | Auto train/val/test split |
CLI Usage
```bash
# Train
traincv train datasets/pets/ --task detect --model yolov8n --epochs 50

# Evaluate
traincv evaluate runs/detect/best.pt --data datasets/pets/

# Export
traincv export runs/detect/best.pt --format onnx --out pets.onnx

# Auto-label with text prompts
traincv autolabel unlabeled/ --method yolo-world --classes "person,dog,cat"

# Convert datasets
traincv convert coco_ds/ --to yolo --out yolo_ds/
```
Examples
Full end-to-end: label, train, export
```python
import traincv

# 1. Auto-label raw images with YOLO-World text prompts
traincv.autolabel("raw/", method="yolo-world",
                  classes=["product", "price_tag"], out="labeled/")

# 2. Split into train/val/test (80/10/10)
traincv.split("labeled/", ratios=(0.8, 0.1, 0.1))

# 3. Train YOLOv8s
run = traincv.train("labeled/", task="detect", model="yolov8s", epochs=100)

# 4. Export for mobile deployment
traincv.export(run.weights_path, format="tflite", out="products.tflite")
```
Fine-tune a classifier on ImageFolder data
```python
import traincv

run = traincv.train(
    "flowers/",  # flowers/daisy/*.jpg, flowers/rose/*.jpg, ...
    task="classify",
    model="mobilenetv3",
    epochs=30,
    lr=1e-3,
)
print(run.best_accuracy)
```
Resume an interrupted run
```python
traincv.train("datasets/pets/", task="detect", resume="runs/detect/exp5/last.pt")
```
License
MIT (c) Viet-Anh Nguyen