Imitation learning benchmark focusing on complex locomotion tasks using MuJoCo.

Project description

LocoMuJoCo is an imitation learning benchmark specifically targeted towards locomotion. It encompasses a diverse set of environments, including quadrupeds, bipeds, and musculoskeletal human models, each accompanied by comprehensive datasets, such as real noisy motion capture data, ground truth expert data, and ground truth sub-optimal data, enabling evaluation across a spectrum of difficulty levels.

LocoMuJoCo also allows you to specify your own reward function, so you can use this benchmark for pure reinforcement learning! Check out the example below!

Key Advantages

✅ Easy to use with Gymnasium or Mushroom-RL interface
✅ Many environments including humanoids and quadrupeds
✅ Diverse set of datasets, e.g., noisy motion capture or ground-truth datasets with actions
✅ Wide spectrum of difficulty levels
✅ Built-in domain randomization
✅ Many baseline algorithms for quick benchmarking
Documentation


Installation

You can install the latest release via PyPI by running

pip install loco-mujoco 

or perform an editable installation by cloning this repository and then running:

cd loco-mujoco
pip install -e . 

[!NOTE] We pinned MuJoCo to version 2.3.7 during installation, since we found slight differences in the simulation between versions, which made testing very difficult. In practice, however, you can use any newer version of MuJoCo! Just install it after installing LocoMuJoCo.
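
For example, to switch to a newer MuJoCo version once LocoMuJoCo is installed:

pip install --upgrade mujoco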

[!NOTE] If you want to run the MyoSkeleton environment, you additionally need to run loco-mujoco-myomodel-init to accept the license and download the model. Finally, you need to upgrade MuJoCo to 3.2.2 and dm_control to 1.0.22 after installing this package and downloading the datasets!
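
Putting that note together, the MyoSkeleton setup would look like this (version pins taken from the note; run after installing LocoMuJoCo and downloading the datasets):

loco-mujoco-myomodel-init
pip install mujoco==3.2.2 dm_control==1.0.22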

Download the Datasets

After installing LocoMuJoCo, new commands for downloading the datasets will be set up for you. You can download all available datasets or only the ones you need.

For example, to download all datasets, run:

loco-mujoco-download

To download only the real (motion capture, no actions) datasets, run:

loco-mujoco-download-real

To download only the perfect (ground-truth with actions) datasets, run:

loco-mujoco-download-perfect

Installing the Baselines

If you also want to run the baselines, you have to install our imitation learning library, imitation_lib. You can find example files for training the baselines on any LocoMuJoCo task here.

First Test

To verify that everything is installed correctly, run one of the examples, such as:

python examples/simple_mushroom_env/example_unitree_a1.py

To replay a dataset, run:

python examples/replay_datasets/replay_Unitree.py

Environments & Tasks

Want a quick overview of all available environments, tasks, and datasets? You can find it here, and in more detail in the Documentation.

And stay tuned! There are many more to come ...


Quick Examples

LocoMuJoCo is very easy to use. Just choose and create an environment, generate the dataset belonging to that task, and you are ready to go!

import loco_mujoco  # registers the LocoMujoco environments with Gymnasium
import gymnasium as gym


env = gym.make("LocoMujoco", env_name="HumanoidTorque.run")
dataset = env.create_dataset()  # dataset belonging to this task
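
For a quick sanity check, you can roll out the environment with random actions. Here is a minimal sketch, assuming the standard Gymnasium API (reset() returning (obs, info) and step() returning a 5-tuple):

obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random action within the action bounds
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()  # episode ended, start a new one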

You want to use LocoMuJoCo for pure reinforcement learning? No problem! Just define your custom reward function and pass it to the environment!

import numpy as np
import loco_mujoco  # registers the LocoMujoco environments with Gymnasium
import gymnasium as gym


def my_reward_function(state, action, next_state):
    # custom reward: penalize large average actions
    return -np.mean(action)


env = gym.make("LocoMujoco", env_name="HumanoidTorque.run", reward_type="custom",
               reward_params=dict(reward_callback=my_reward_function))
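
Stepping this environment now yields rewards from my_reward_function. A minimal sketch, again assuming the standard Gymnasium step signature:

obs, info = env.reset()
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
print(reward)  # computed by my_reward_function, i.e., -np.mean(action) here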

LocoMuJoCo natively supports MushroomRL:

import numpy as np
from loco_mujoco import LocoEnv

env = LocoEnv.make("HumanoidTorque.run")
dataset = env.create_dataset()
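
A random-action rollout looks slightly different in this interface. The sketch below assumes Mushroom-RL's conventions, where reset() returns the initial observation and step() returns (obs, reward, absorbing, info):

obs = env.reset()
for _ in range(100):
    # sample a random action with the dimensionality given by the MDP info
    action = np.random.uniform(-1.0, 1.0, env.info.action_space.shape)
    obs, reward, absorbing, info = env.step(action)
    if absorbing:
        obs = env.reset()  # episode reached an absorbing state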

You can find many more examples here.

Detailed Tutorials are given in the Documentation.


Citation

@inproceedings{alhafez2023b,
  title={LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion},
  author={Firas Al-Hafez and Guoping Zhao and Jan Peters and Davide Tateo},
  booktitle={6th Robot Learning Workshop, NeurIPS},
  year={2023}
}

Credits

Both Unitree models were taken from the MuJoCo Menagerie.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

loco-mujoco-0.4.1.tar.gz (29.6 MB)

File details

Details for the file loco-mujoco-0.4.1.tar.gz.

File metadata

  • Download URL: loco-mujoco-0.4.1.tar.gz
  • Size: 29.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.4

File hashes

Hashes for loco-mujoco-0.4.1.tar.gz:

  • SHA256: 44776fdd4e4d504a4dc933c90f0ccc68816e91efb5ffb2bcea7d06dbc5daa940
  • MD5: 1055a4ca7639c6ec5c4d8b1fcb241b72
  • BLAKE2b-256: 366bfe0ad7af1ebd12b246f4710aa21a5222bfaf4e0bcc31e1020908987867f0

See more details on using hashes here.
