
Imitation learning benchmark focusing on complex locomotion tasks using MuJoCo.



🚀 Latest News: A major release (v1.0) just dropped! 🎉
LocoMuJoCo now supports MJX and comes with new JAX algorithms. We also added many new environments and +22k datasets! 🚀

LocoMuJoCo is an imitation learning benchmark specifically designed for whole-body control.
It features a diverse set of environments, including quadrupeds, humanoids, and (musculo-)skeletal human models, each provided with comprehensive datasets (over 22,000 samples per humanoid).

Although primarily focused on imitation learning, LocoMuJoCo also supports custom reward function classes,
making it suitable for pure reinforcement learning as well.
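To illustrate the idea of a swappable reward class for pure RL, here is a minimal sketch. The class and method names below are hypothetical and do not reflect LocoMuJoCo's actual reward interface; they only show the pattern of defining a reward as an exchangeable component:

```python
import numpy as np

# Illustrative sketch only -- the base class and method names are
# hypothetical, NOT LocoMuJoCo's actual API.
class RewardFunction:
    """Abstract reward: maps (state, action, next_state) to a scalar."""
    def __call__(self, state, action, next_state):
        raise NotImplementedError

class ForwardVelocityReward(RewardFunction):
    """Rewards forward progress minus a small action penalty."""
    def __init__(self, vel_index=0, ctrl_cost=1e-3):
        self.vel_index = vel_index  # index of forward velocity in the state vector
        self.ctrl_cost = ctrl_cost  # weight of the quadratic action penalty

    def __call__(self, state, action, next_state):
        progress = float(next_state[self.vel_index])
        penalty = self.ctrl_cost * float(np.sum(np.square(action)))
        return progress - penalty
```

Because the reward is an object rather than hard-coded in the environment step, it can be swapped without touching the rest of the pipeline.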

Key Advantages

✅ Supports MuJoCo (single environment) and MJX (parallel environments)
✅ Includes 12 humanoid and 4 quadruped environments, featuring 4 biomechanical human models
✅ Clean single-file JAX algorithms for quick benchmarking (PPO, GAIL, AMP, DeepMimic)
✅ Combined training and environment into one JIT‑compiled function for lightning‑fast training 🚀
✅ Over 22,000 motion capture datasets (AMASS, LAFAN1, native LocoMuJoCo) retargeted for each humanoid
✅ Robot-to-robot retargeting allows any existing dataset to be retargeted from one robot to another
✅ Powerful trajectory comparison metrics including dynamic time warping and discrete Fréchet distance, all in JAX
✅ Interface for Gymnasium
✅ Built-in domain and terrain randomization
✅ Modular design: define, swap, and reuse components like observation types, reward functions, terminal state handlers, and domain randomization
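As a reference for the trajectory metrics mentioned above, here is a small self-contained sketch of the discrete Fréchet distance in plain NumPy (the benchmark's own implementations are in JAX; this is only an illustration of the metric, not LocoMuJoCo's code):

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between trajectories p (n, d) and q (m, d).

    Classic dynamic program: ca[i, j] is the smallest 'leash length' needed
    to jointly traverse p[:i+1] and q[:j+1] without backtracking.
    """
    n, m = len(p), len(q)
    # pairwise Euclidean distances between all trajectory points
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):  # first column: only p advances
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):  # first row: only q advances
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```

Unlike dynamic time warping, which sums per-step distances, the Fréchet distance takes the maximum along the optimal coupling, so it penalizes the single worst deviation between the two trajectories.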
Documentation


Installation

You can install the latest release from PyPI by running

pip install loco-mujoco 

Or, clone this repo and do an editable installation:

cd loco-mujoco
pip install -e . 

By default, both will install the CPU version of JAX. If you want to use JAX on the GPU, install it with CUDA support:

pip install "jax[cuda12]"

[!NOTE] If you want to run the MyoSkeleton environment, you need to additionally run loco-mujoco-myomodel-init to accept the license and download the model.

Datasets

LocoMuJoCo provides three sources of motion capture (mocap) data for humanoid environments: default (provided by us), LAFAN1, and AMASS. The first two datasets are available on the LocoMuJoCo Hugging Face dataset repository and will be downloaded and cached automatically for you. AMASS needs to be downloaded and installed separately due to its licensing terms. See here for more information about the installation.

This is how you can visualize the datasets:

from loco_mujoco.task_factories import ImitationFactory, LAFAN1DatasetConf, DefaultDatasetConf, AMASSDatasetConf


# example --> you can add as many datasets as you want in the lists!
env = ImitationFactory.make("UnitreeH1",
                            default_dataset_conf=DefaultDatasetConf(["squat"]),
                            lafan1_dataset_conf=LAFAN1DatasetConf(["dance2_subject4", "walk1_subject1"]),
                            # if SMPL and AMASS are installed, you can use the following:
                            #amass_dataset_conf=AMASSDatasetConf(["DanceDB/DanceDB/20120911_TheodorosSourmelis/Capoeira_Theodoros_v2_C3D_poses"])
                            )

env.play_trajectory(n_episodes=3, n_steps_per_episode=500, render=True)

Speeding up Dataset Loading

LocoMuJoCo only stores datasets with joint positions and velocities to save memory. All other attributes are calculated using forward kinematics upon loading. If you want to speed up the dataset loading, you can define caches for the datasets. This will store the forward kinematics results in a cache file, which will be loaded on the next run:

loco-mujoco-set-all-caches --path <path to cache>

For instance, you could run:

loco-mujoco-set-all-caches --path "$HOME/.loco-mujoco-caches"
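The caching idea is simple: run the expensive forward-kinematics pass once, write the result to disk, and reload it on subsequent runs. A minimal sketch of that pattern (not LocoMuJoCo's internal code; `compute_fk` here is a hypothetical stand-in for the forward-kinematics step):

```python
import hashlib
import os
import pickle

def load_with_cache(dataset_id, compute_fk, cache_dir):
    """Return FK results for a dataset, computing them only on a cache miss.

    dataset_id: string identifying the dataset.
    compute_fk: callable doing the expensive forward-kinematics work
                (hypothetical stand-in for illustration).
    cache_dir:  directory where results are stored between runs.
    """
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.sha256(dataset_id.encode()).hexdigest()[:16]
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):            # cache hit: reuse the stored result
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute_fk(dataset_id)     # cache miss: do the expensive work once
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

The second and every later call with the same dataset hits the cache file and skips the forward-kinematics computation entirely.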

Environments

Want a quick overview of all available environments? You can find it here, and in more detail in the Documentation.

And stay tuned! There are many more to come ...


Tutorials

We provide a set of tutorials to help you get started with LocoMuJoCo. You can find them in the tutorials folder or with more explanation in the documentation.

If you want to check out training examples for a PPO, GAIL, AMP, or DeepMimic agent, you can find them in the training examples folder. For instance, here is an example of a DeepMimic agent you can train to achieve human-like walking in all directions; it was trained in 36 min on an RTX 3080 Ti:


Citation

@inproceedings{alhafez2023b,
  title={LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion},
  author={Firas Al-Hafez and Guoping Zhao and Jan Peters and Davide Tateo},
  booktitle={6th Robot Learning Workshop, NeurIPS},
  year={2023}
}

