Create object-detection datasets (YOLO) using Moondream
Overview
MoonLabel is both a Python library and a tiny web UI to generate object-detection datasets quickly.
- Use the library to auto-label folders of images and export YOLO, COCO, or VOC.
- Or launch the UI and visually export YOLO/COCO/VOC with one click.
Backends supported: Moondream Cloud, Moondream Station, or fully local (Hugging Face).
Demo
https://github.com/user-attachments/assets/a2dfc6b6-c83d-4296-986b-ac221e10fc3b
Features
- 📦 Library + UI — moonlabel package with an optional web UI.
- 🌐 FastAPI server — served by a single moonlabel-ui command.
- ⚛️ Modern frontend — React, TypeScript, TailwindCSS, Vite.
- 🖼️ Object detection — Choose between Moondream Cloud, the open-source Hugging Face model, or the native Moondream Station app.
- ⚡ GPU-accelerated & offline — Local and Station modes automatically use available hardware acceleration (CUDA / MPS).
Install
- Library only (Cloud/Station by default):
pip install moonlabel
- Library + UI server:
pip install "moonlabel[ui]"
- Local inference (Hugging Face) extras:
pip install "moonlabel[local]"
- Both UI and local inference:
pip install "moonlabel[ui,local]"
Quick Start (UI)
pip install "moonlabel[ui]"
moonlabel-ui # opens http://localhost:8342
Choose backend in Settings:
- Moondream Cloud: paste API key
- Moondream Station: set endpoint (default http://localhost:2020/v1)
- Local (Hugging Face): install local extras and select Local
Quick Start (Library)
from moonlabel import create_dataset
# Cloud
create_dataset("/path/to/images", objects=["person"], api_key="YOUR_API_KEY")
# Station
create_dataset("/path/to/images", objects=["car"], station_endpoint="http://localhost:2020/v1")
# Local (after: pip install "moonlabel[local]")
create_dataset("/path/to/images", objects=["bottle"]) # no key needed
By default this exports YOLO. Choose formats via export_format:
# YOLO (default)
create_dataset("/path/to/images", objects=["person"], export_format="yolo")
# COCO
create_dataset("/path/to/images", objects=["person", "car"], export_format="coco")
# Pascal VOC
create_dataset("/path/to/images", objects=["cat", "dog"], export_format="voc")
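Each line in a YOLO label file stores a class index plus normalized center/size coordinates. A minimal sketch for turning one line back into pixel coordinates (yolo_to_pixels is an illustrative helper, not part of moonlabel):

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Parse 'class x_center y_center width height' (all normalized to 0-1)
    into (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    return (
        int(cls),
        (xc - w / 2) * img_w,
        (yc - h / 2) * img_h,
        (xc + w / 2) * img_w,
        (yc + h / 2) * img_h,
    )

# A centered box covering half the width and height of a 640x480 image:
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 480))
# → (0, 160.0, 120.0, 480.0, 360.0)
```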
Output layouts:
- YOLO: images/, labels/, classes.txt
- COCO: images/, annotations/instances.json, classes.txt
- VOC: images/, annotations/*.xml, classes.txt
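As a quick sanity check of a COCO export, you can count the entries in the generated annotations/instances.json (summarize_coco is a hypothetical helper, not part of moonlabel; it assumes the standard COCO keys):

```python
import json
from pathlib import Path

def summarize_coco(dataset_dir: str) -> dict:
    """Count images and annotations in a COCO-format export, using the
    annotations/instances.json layout listed above."""
    coco = json.loads(
        (Path(dataset_dir) / "annotations" / "instances.json").read_text()
    )
    return {
        "images": len(coco.get("images", [])),
        "annotations": len(coco.get("annotations", [])),
        "categories": [c["name"] for c in coco.get("categories", [])],
    }
```

Running this on a freshly exported dataset gives a quick read on whether every image received annotations.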
Moondream Station Mode
The backend can connect to a running Moondream Station instance for fast, native, on-device inference.
- Download, install, and run Moondream Station.
- Ensure the endpoint matches your Station configuration (default: http://localhost:2020/v1).
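Before labeling, it can help to confirm the Station endpoint is reachable. A minimal stdlib check (station_reachable is a hypothetical helper, not part of moonlabel):

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def station_reachable(endpoint: str, timeout: float = 2.0) -> bool:
    """Return True if anything answers at the given endpoint."""
    try:
        urlopen(endpoint, timeout=timeout)
        return True
    except HTTPError:
        return True   # got an HTTP response → something is listening
    except (URLError, OSError):
        return False  # connection refused, timeout, DNS failure, etc.

# e.g. station_reachable("http://localhost:2020/v1")
```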
Local Mode (Hugging Face)
The backend can run fully offline using the open-source vikhyatk/moondream2 checkpoint.
pip install "moonlabel[local]"
In the UI, select Local (no API key required).
The first detection will trigger a one-off model download to ~/.cache/huggingface/; subsequent runs reuse the cached weights.
GPU / Device selection
The backend chooses the best device automatically in the following order: CUDA → Apple Silicon (MPS) → CPU.
Override via environment variable before launching the backend:
# Force GPU
export MOONDREAM_DEVICE=cuda
# Force Apple Silicon
export MOONDREAM_DEVICE=mps
# CPU only
export MOONDREAM_DEVICE=cpu
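The selection order above can be sketched as follows. This is a simplified stand-in, assuming only that the backend honors MOONDREAM_DEVICE and otherwise falls back CUDA → MPS → CPU; pick_device and its boolean probes are illustrative, and the real backend presumably detects hardware via the ML framework:

```python
import os

def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Sketch of device selection: an explicit MOONDREAM_DEVICE override
    wins; otherwise fall back CUDA → MPS → CPU."""
    override = os.environ.get("MOONDREAM_DEVICE")
    if override:
        return override
    return "cuda" if cuda_ok else "mps" if mps_ok else "cpu"
```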
Project Structure
moonlabel/
├── src/moonlabel/ # Python package (library + server)
│ ├── dataset.py # create_dataset API
│ ├── infer.py # Moondream wrapper (cloud/station/local)
│ ├── types.py # shared types
│ ├── utils.py # helpers
│ ├── yolo.py # YOLO label writer
│ ├── coco.py # COCO writer
│ ├── voc.py # Pascal VOC XML writer
│ └── server/ # FastAPI app + static assets
│ ├── api.py
│ ├── cli.py # moonlabel-ui entrypoint (port 8342)
│ └── static/ # embedded UI build (no npm for users)
├── ui/ # Frontend source (for maintainers)
│ └── dist/ # Built files to embed
├── scripts/embed_ui.py # Copies ui/dist → src/moonlabel/server/static
├── Makefile # make ui-build, ui-embed, release
└── pyproject.toml
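The embed step performed by scripts/embed_ui.py can be sketched roughly like this (an illustrative stand-in, not the actual script):

```python
import shutil
from pathlib import Path

def embed_ui(repo_root: str) -> None:
    """Copy the built frontend (ui/dist) into the package's static dir,
    so pip-installed users get the UI without needing npm."""
    src = Path(repo_root) / "ui" / "dist"
    dst = Path(repo_root) / "src" / "moonlabel" / "server" / "static"
    if dst.exists():
        shutil.rmtree(dst)  # replace any stale embedded build
    shutil.copytree(src, dst)
```

Shipping the built UI inside the wheel is what lets the moonlabel-ui command serve a frontend with no Node toolchain on the user's machine.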
Roadmap / TODOs
Below are planned enhancements and upcoming features. Contributions welcome!
- Local Hugging Face model support – Offline inference with optional GPU acceleration.
- Moondream Station integration – Native Mac/Linux app support for on-device inference.
- Batch uploads – Label multiple images in one go, with progress tracking.
- Additional export formats – COCO JSON and Pascal VOC alongside YOLO.
License
This project is licensed under the terms of the Apache License 2.0. See LICENSE for details.
Download files
File details
Details for the file moonlabel-0.1.2.tar.gz.
File metadata
- Download URL: moonlabel-0.1.2.tar.gz
- Upload date:
- Size: 170.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 414ff77dbcd1eb09f4cfffc366051fc70cd74ef3ef7bd001c23e683b10f063fb |
| MD5 | 06f23e9a7bb9f60c683d49c0deb71414 |
| BLAKE2b-256 | 76db6dfa40456530e03abf027ac6741dace6c482fc76495b10b9051303b6a91d |
File details
Details for the file moonlabel-0.1.2-py3-none-any.whl.
File metadata
- Download URL: moonlabel-0.1.2-py3-none-any.whl
- Upload date:
- Size: 170.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c48e850c57fd284738546103fa9a9c82c5b0fd32ae250523b53ac7e474b0ccd2 |
| MD5 | ac9b645a34e8c9799438c87d13c2b574 |
| BLAKE2b-256 | d32bf6576c1be4105bcacaa9b2334efcf0da995b5761281d84218a1b0a135fc0 |