
An API designed to make OpenMMLab EZ


🚀 ez_openmmlab: OpenMMLab Made EZ

Use OpenMMLab through an EZ, familiar API ;)


ez_openmmlab is a high-level, TOML-first wrapper that makes SOTA OpenMMLab models (RTMDet, RTMPose, and RTMO) actually EZ to use. Stop fighting with 500-line Python configs and dataset registries—just write a few lines of TOML and get back to building.

💡 New to ez_openmmlab? Check out the demos/ folder for complete end-to-end examples!


🏋️ 1. Train

Forget framework-level "surgery". Define your data in a simple dataset.toml, call .train(), and ez_openmmlab handles the rest.

Step A: Define your data (dataset.toml)

No more manual registration. Just point to your files.

data_root = "datasets/my_project"
classes = ["cat", "dog"]

[train]
ann_file = "annotations/train.json"
img_dir = "images/train"

[val]
ann_file = "annotations/val.json"
img_dir = "images/val"
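The annotation files referenced above are assumed to be standard COCO-format JSON. For orientation, here is a minimal sketch of what such a file contains (field names follow the COCO spec; the values are purely illustrative):

```python
import json

# Minimal COCO-format detection annotations (illustrative values only)
coco = {
    "images": [
        {"id": 1, "file_name": "cat_001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 0,            # index into `classes` in dataset.toml
            "bbox": [100, 120, 80, 60],  # COCO boxes are [x, y, width, height]
            "area": 80 * 60,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 0, "name": "cat"},
        {"id": 1, "name": "dog"},
    ],
}

with open("train.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Note that COCO stores boxes as `[x, y, width, height]`, not `[x1, y1, x2, y2]` — a common source of silently wrong training data.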

Step B: Launch Training

One method call, and an API that should feel familiar to most of you ;)

from ez_openmmlab import RTMDet

# Initialize (choices: rtmdet_tiny, rtmdet_s, rtmdet_m, rtmdet_l, rtmdet_x)
model = RTMDet("rtmdet_tiny")

# Start training - outputs user_config.toml for easy reloading
model.train(
    dataset_config_path="dataset.toml",
    epochs=100,
    batch_size=16,
)

Resume Interrupted Training

Training got interrupted? No problem - just resume where you left off:

from ez_openmmlab import RTMDet

# Load from your previous run
# Point this at the user_config.toml generated by the interrupted run
model = RTMDet(model="path/to/user_config.toml")

# Resume from the last saved checkpoint
model.resume()

Training Demo


🔍 2. Inference

Load your trained model or use pretrained weights. Predict and visualize with a single line.

from ez_openmmlab import RTMDet

# Option 1: Load your trained model
model = RTMDet(
    model="user_config.toml",      # Config generated during training
    checkpoint_path="epoch_100.pth" # Your trained checkpoint
)

# Option 2: Use pretrained model
model = RTMDet("rtmdet_s")  # Auto-downloads pretrained weights

# Run inference
results = model.predict("sample.jpg", show=True)

# Access clean, structured results
for box in results[0].boxes:
    print(f"Class: {box.cls}, Score: {box.conf:.3f}, BBox: {box.xyxy}")
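Because results come back as NumPy-friendly arrays, downstream filtering is plain array code. A minimal sketch of vectorized score thresholding (the `(N, 4)` box layout mirrors `xyxy`/`conf`/`cls` above, but the exact array shapes here are our assumption):

```python
import numpy as np

def filter_detections(xyxy, conf, cls, score_thr=0.5):
    """Keep only detections whose confidence exceeds score_thr.

    xyxy: (N, 4) float array of [x1, y1, x2, y2] boxes
    conf: (N,) float array of confidence scores
    cls:  (N,) int array of class indices
    """
    keep = conf >= score_thr
    return xyxy[keep], conf[keep], cls[keep]

# Example with dummy detections
boxes = np.array([[10, 10, 50, 50], [20, 20, 80, 80], [0, 0, 5, 5]], dtype=float)
scores = np.array([0.9, 0.4, 0.75])
labels = np.array([0, 1, 0])

kept_boxes, kept_scores, kept_labels = filter_detections(boxes, scores, labels)
print(len(kept_boxes))  # 2 detections survive the 0.5 threshold
```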

Inference Demo


🚢 3. Export

Deploying to production with MMDeploy is usually a nightmare. We simplified it to a single method call that runs MMDeploy for you via Docker.

from ez_openmmlab import RTMDet

# Load your model (trained or pretrained)
model = RTMDet(
    model="user_config.toml",
    checkpoint_path="epoch_100.pth"
)

# Export to ONNX or TensorRT
model.export(
    format="onnx",        # Options: 'onnx' or 'tensorrt'
    image="sample.jpg",   # Required for model tracing
    output_dir="deploy/", # Where to save artifacts
    device="cpu"          # Use 'cuda' for TensorRT
)

Export Demo


🧘 Custom Pose Estimation? Still EZ.

Training on custom keypoints? Just add your metainfo to the TOML. You can add as many keypoints as your dataset requires.

# pose_dataset.toml
data_root = "datasets/custom_pose"
classes = ["dog"]

[train]
ann_file = "annotations/train.json"
img_dir = "images/train"

[val]
ann_file = "annotations/val.json"
img_dir = "images/val"

[metainfo]
sigmas = [0.025, 0.025, 0.05]  # One per keypoint
joint_weights = [1.0, 1.0, 1.0] # One per keypoint

[metainfo.keypoint_info.0]
name = "nose"
id = 0
color = [51, 153, 255]

[metainfo.keypoint_info.1]
name = "left_eye"
id = 1
color = [51, 153, 255]

[metainfo.keypoint_info.2]
name = "right_eye"
id = 2
color = [51, 153, 255]

[metainfo.skeleton_info.0]
link = ["nose", "left_eye"]
id = 0

[metainfo.skeleton_info.1]
link = ["nose", "right_eye"]
id = 1

# Add as many keypoint_info and skeleton_info entries as your dataset needs
from ez_openmmlab import RTMPose

# Initialize (choices: rtmpose_tiny, rtmpose_s, rtmpose_m, ...)
model = RTMPose("rtmpose_s")
model.train(dataset_config_path="pose_dataset.toml", epochs=210)

# Inference with your custom keypoints
results = model.predict("person.jpg", show=True)
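The `sigmas` in `[metainfo]` feed COCO-style keypoint evaluation. As a worked reference, Object Keypoint Similarity (OKS) can be sketched in plain NumPy (the formula follows the COCO keypoint metric; the variable names are ours):

```python
import numpy as np

def oks(pred, gt, sigmas, area):
    """COCO-style Object Keypoint Similarity.

    pred, gt: (K, 2) arrays of keypoint coordinates
    sigmas:   (K,) per-keypoint falloff constants (as in [metainfo])
    area:     area of the object instance (sets the distance scale)
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)      # squared pixel distances
    k2 = (2 * np.asarray(sigmas)) ** 2          # per-keypoint variance terms
    e = d2 / (2 * area * k2 + np.spacing(1))    # normalized error
    return float(np.mean(np.exp(-e)))

sigmas = [0.025, 0.025, 0.05]                   # from pose_dataset.toml above
gt = np.array([[100.0, 100.0], [110.0, 95.0], [90.0, 95.0]])
perfect = oks(gt, gt, sigmas, area=5000.0)
print(perfect)  # 1.0 when predictions match ground truth exactly
```

Smaller sigmas make a keypoint stricter to match, which is why `nose`-like points conventionally get lower values than larger, fuzzier landmarks.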

🛠️ Installation

Requirements

  • Python 3.9 or 3.10
  • NVIDIA GPU with CUDA 11.7 (for GPU version)
  • Linux or Windows
  • Git

Installation

Quick Install (Recommended)

Our install scripts support both uv (faster) and pip (traditional). You'll be prompted to choose during installation.

GPU (CUDA 11.7):

curl -sSL https://raw.githubusercontent.com/JustAnalyze/ez_openmmlab/main/install.sh | bash

CPU:

curl -sSL https://raw.githubusercontent.com/JustAnalyze/ez_openmmlab/main/install-cpu.sh | bash

💡 Tip: Choose uv when prompted for 10-100x faster installation!

Manual Installation

⚠️ Important: Steps must be followed in order. Installing chumpy before ez-openmmlab is required due to a known issue with the upstream chumpy package.

GPU (CUDA 11.7)

# Step 1: Install PyTorch with CUDA support
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
    --index-url https://download.pytorch.org/whl/cu117

# Step 2: Install MMCV with CUDA support
pip install mmcv==2.1.0 \
    -f https://download.openmmlab.com/mmcv/dist/cu117/torch2.0/index.html

# Step 3: Install chumpy (fixed version)
pip install git+https://github.com/JustAnalyze/chumpy.git@master

# Step 4: Install ez-openmmlab
pip install ez-openmmlab

CPU Only

# Step 1: Install PyTorch (CPU)
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
    --index-url https://download.pytorch.org/whl/cpu

# Step 2: Install MMCV (CPU)
pip install mmcv==2.1.0 \
    -f https://download.openmmlab.com/mmcv/dist/cpu/torch2.0/index.html

# Step 3: Install chumpy (fixed version)
pip install git+https://github.com/JustAnalyze/chumpy.git@master

# Step 4: Install ez-openmmlab
pip install ez-openmmlab

Why install chumpy manually? The upstream chumpy package (v0.70) has a broken setup.py that causes installation failures with modern Python packaging tools. We maintain a fixed fork that resolves these issues.


✨ Key Features

  • EZ Environment: Reproducible setups that just work via uv.
  • EZ Configuration: Human-readable TOML replaces complex Python config inheritance.
  • Auto-Magic Checkpoints: Missing weights? We download them for you automatically.
  • Strict Validation: Powered by Pydantic to catch errors before you start your run.
  • Performance Optimized: Vectorized, NumPy-first results with lazy initialization.
  • Flexible Model Loading: Load pretrained models or your own trained checkpoints seamlessly.

📚 Quick Start Examples

Object Detection Workflow

from ez_openmmlab import RTMDet

# 1. Train on custom data
model = RTMDet("rtmdet_s")
model.train(dataset_config_path="dataset.toml", epochs=100)

# 2. Inference with trained model
model = RTMDet(model="user_config.toml", checkpoint_path="epoch_100.pth")
results = model.predict("test_image.jpg", show=True)

# 3. Export for deployment
model.export(format="onnx", image="test_image.jpg", output_dir="deploy/")

Pose Estimation Workflow

from ez_openmmlab import RTMPose

# 1. Train on custom keypoints
model = RTMPose("rtmpose_m")
model.train(dataset_config_path="pose_dataset.toml", epochs=210)

# 2. Inference
model = RTMPose(model="user_config.toml", checkpoint_path="best_model.pth")
results = model.predict("person.jpg", show=True)

# Access keypoint coordinates
for person in results[0].keypoints:
    print(f"Keypoints: {person.xy}")  # Shape: [num_keypoints, 2]
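To draw the skeleton defined in `[metainfo.skeleton_info.*]`, the name-based links first need resolving to keypoint indices. A small sketch (the keypoint order mirrors the TOML above; the actual drawing call is left to your plotting library of choice):

```python
# Keypoint names in the order of the [metainfo.keypoint_info.*] entries
keypoint_names = ["nose", "left_eye", "right_eye"]
# Name-based links from [metainfo.skeleton_info.*]
links = [["nose", "left_eye"], ["nose", "right_eye"]]

name_to_idx = {name: i for i, name in enumerate(keypoint_names)}
index_links = [(name_to_idx[a], name_to_idx[b]) for a, b in links]
print(index_links)  # [(0, 1), (0, 2)]

# Each pair indexes into a (num_keypoints, 2) array like `person.xy`,
# giving the two endpoints of one skeleton edge to draw.
```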

🗺️ Roadmap

  • Resume Training: Continue from interrupted training sessions.
  • Native Export: One-click .export() to ONNX and TensorRT.
  • Full CLI: Run training and inference directly from your terminal.
  • Architecture Expansion: Bringing the "EZ" treatment to more OpenMMLab models. (This is a good candidate: https://github.com/53mins/CIGPose)

🤝 Acknowledgements

ez_openmmlab wouldn't exist without the relentless research and engineering of the OpenMMLab team.

Currently Supported:

  • Detection & Segmentation: rtmdet, rtmdet-ins
  • 2D Pose Estimation: rtmpose, rtmo

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


🐛 Issues & Contributions

Found a bug? Have a feature request? Open an issue!

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.


📖 Learn More

  • Demo Examples - Complete end-to-end workflows with datasets
  • Issues - Report bugs or request features
