optiq: imitation-first motion learning

optiq ingests FBX animations, converts them to Parquet/HDF5 datasets, trains sequence models (Transformer/MLP/UNet1D), and can bootstrap stable-baselines3 RL policies from pretrained encoders. Blender capture helpers remain available, but the primary flow is motion → dataset → model → RL/viz.

Installation

optiq uses optional dependency groups ("extras") to keep the base install lightweight. Install only what you need:

Quick Start (uv)

uv venv .venv
source .venv/bin/activate

# Minimal install (core only: numpy, torch, pydantic, click, scipy, trimesh)
uv pip install -e .

# Development install (everything)
uv pip install -e ".[dev]"

Available Extras

Extra   Includes                                         Use Case
-----   --------                                         --------
ml      pytorch-lightning, torchmetrics, mlflow, h5py    Model training & experiment tracking
rl      stable-baselines3, gymnasium[mujoco], mujoco     Reinforcement learning
viz     plotly, moviepy, matplotlib                      Visualization & video rendering
web     fastapi, uvicorn, django, sqlmodel               Web apps & APIs
infra   prometheus-client, kombu, redis                  Monitoring & task queues
all     All of the above                                 Full installation
dev     All of the above + pytest, pytest-cov            Development & testing

Install Examples

# Just RL and visualization
pip install -e ".[rl,viz]"

# Web deployment with monitoring
pip install -e ".[web,infra]"

# ML training with visualization
pip install -e ".[ml,viz]"

# Everything
pip install -e ".[all]"

pip (alternative to uv)

python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

CLI Overview

optiq provides a unified CLI with global options for reproducibility:

optiq [--seed INT] [--device cpu|cuda|mps|auto] COMMAND
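
For example, to pin the RNG seed and force CPU execution for a training run (global options precede the subcommand, as in the synopsis above):

optiq --seed 42 --device cpu train model --config configs/model.yaml --arch mlp --out ckpt.pt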

Data Commands

Convert motion files and attach labels:

# Convert JSON → Parquet with velocity computation
optiq data convert --in ground_truth.json --out seq.parquet --compute-velocities

# Attach sequence and frame labels
optiq data label --in seq.parquet --out seq_labeled.parquet \
  --label movement=walk \
  --frame-labels frame_labels.json \
  --label-names phase

Train Commands

Train models from the model registry, or train RL policies:

# Train a model (transformer, mlp, or unet1d)
optiq train model --config configs/model.yaml --arch transformer --out ckpt.pt

# Train an RL policy with optional pretrained encoder
optiq train rl --env Humanoid-v5 --algo ppo --total-steps 300000 --out policy.zip

# With pretrained encoder integration
optiq train rl --env Humanoid-v5 --algo ppo \
  --pretrained encoder.ts \
  --adapter-mode feature \
  --total-steps 300000 \
  --out policy.zip \
  --record-video

Visualization Commands

# Plotly animation from predictions or dataset
optiq viz plotly --pred pred.json --out viz.html --tubes
optiq viz plotly --dataset seq.parquet --out viz.html

# Render policy rollout to video
optiq viz video --policy policy.zip --env Humanoid-v5 --out rollout.mp4

Python API

Data Pipeline

from optiq.data import load_sequence, build_imitation_dataset, build_bc_dataset

seq = load_sequence("seq.parquet")
ds = build_imitation_dataset(seq, mode="next", horizon=1)
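
The resulting dataset can be consumed like any PyTorch dataset. A minimal sketch, assuming build_imitation_dataset returns a torch-compatible Dataset of (previous state, next state) pairs:

from torch.utils.data import DataLoader

# Batch the imitation pairs for training
loader = DataLoader(ds, batch_size=32, shuffle=True)
prev_state, next_state = next(iter(loader))
print(prev_state.shape, next_state.shape)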

Models

import torch

from optiq.models import (
    build,
    available,
    list_models,
    load_checkpoint,
    export_torchscript,
)

# Discover models
print(list_models())  # ['mlp', 'transformer', 'unet1d']

# Transformer with causal mask and conditioning
model = build("transformer",
    input_dim=455,
    model_dim=128,
    num_layers=2,
    num_heads=4,
    causal=True,
    conditioning_dim=10,
)

# MLP with horizon (uses last N timesteps)
model = build("mlp",
    input_dim=455,
    hidden_dims=[256, 128],
    horizon=3,
)

# UNet1D with attention and diffusion support
model = build("unet1d",
    input_dim=455,
    base_channels=64,
    num_res_blocks=2,
    attention_layers=[0, 1],
    noise_schedule="linear",
)

# Standard forward signature for all models
out = model(prev_state, condition=cond_vec, context=timestep)
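# (Assumed shapes, for illustration only: prev_state is (batch, seq_len, input_dim)
#  for transformer/unet1d and covers the last `horizon` timesteps for mlp; condition
#  matches conditioning_dim; context carries the diffusion timestep for unet1d.)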

# Save / load checkpoints with metadata
torch.save({
    "arch": "mlp",
    "config": {"input_dim": 455, "output_dim": 455},
    "model_state_dict": model.state_dict(),
}, "ckpt.pth")
loaded_model, meta = load_checkpoint("ckpt.pth")

# Export TorchScript for RL adapters or serving
export_torchscript(model, example_input=torch.zeros(1, 455), out_path="model_ts.pt")
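
The exported module can then be loaded without optiq on the consuming side; torch.jit.load is standard PyTorch:

import torch

# Load the exported TorchScript module and run a dummy forward pass
ts_model = torch.jit.load("model_ts.pt")
with torch.no_grad():
    out = ts_model(torch.zeros(1, 455))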

Model Config Templates

  • configs/model_mlp.yaml
  • configs/model_transformer.yaml
  • configs/model_unet1d.yaml

Each template includes the optimizer (AdamW), scheduler (cosine), gradient clipping, and input/output/conditioning dimensions. Use them with optiq train model --config configs/model_transformer.yaml.
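
As a rough sketch of the expected shape (the keys below are illustrative assumptions, not the shipped templates; see the files above for the authoritative fields):

arch: transformer
model:
  input_dim: 455
  output_dim: 455
  model_dim: 128
  num_layers: 2
  num_heads: 4
  conditioning_dim: 10
optimizer:
  name: adamw
  lr: 3.0e-4
scheduler:
  name: cosine
grad_clip: 1.0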

RL Bootstrapping

from optiq.rl import (
    load_pretrained,
    build_adapters,
    create_bootstrap_artifacts,
    make_policy,
    save_bundle,
    transfer_weights,
)

# Load pretrained encoder (TorchScript or state_dict)
pretrained = load_pretrained("encoder.ts", torchscript_in_dim=455)
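
# Assumed setup (not shown here): env is a gymnasium environment and obs_dim
# its observation size, e.g.:
#   env = gymnasium.make("Humanoid-v5")
#   obs_dim = env.observation_space.shape[0]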

# Build adapters to match the environment's observation dimension
adapters = build_adapters(obs_dim, pretrained, adapter_mode="feature")

# Create SB3 policy with feature extractor
model = make_policy("ppo", env, adapter_mode="feature", adapter_spec=adapters)

# Or use the one-call convenience function
artifacts = create_bootstrap_artifacts(
    path="encoder.ts",
    env_obs_dim=455,
    adapter_mode="feature",
)

# Save with fallback bundle for TorchScript compatibility
save_bundle(model, adapter=adapters.adapter, path="policy_bundle.pth")
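
From here, training and saving follow the standard stable-baselines3 loop:

model.learn(total_timesteps=300_000)
model.save("policy.zip")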

Quickstart (humanoid)

# Extract FBX → Parquet + train kinematic prior + optional Mujoco PPO bootstrap
uv run python examples/humanoid_imitation/run.py --fbx Walking.fbx --ppo-bootstrap

Training Configs

Configs under configs/ point to Parquet by default (convert via optiq data convert):

  • train_cnn_next.json
  • train_unet_diffusion.yaml
  • train_custom.yaml

Development

Setup

# Install development dependencies
uv pip install -e ".[dev]"

# Setup git hooks for automatic formatting
./setup-hooks.sh

Code Formatting

The pre-commit hook automatically formats code with:

  • black: Code formatting (88 char line length)

To skip formatting for a specific commit:

git commit --no-verify

Testing

# Run all tests
uv run pytest tests/

# Run with coverage
uv run pytest tests/ --cov=optiq --cov-report=html

Legacy Blender capture (still available)

  • Capture attributes in Blender and export to JSON/CSV/Parquet with capture_selected_object / capture_object.
  • Convert captures to temporal graphs with capture_to_temporal_graph and sequence batches with make_sequence_batches.
  • Example scripts remain under examples/ for Blender-driven captures.

License

MIT License © Ted T.
