Towards Embodied AI with MuscleMimic: Unlocking full-body musculoskeletal motor learning at scale
MuscleMimic is a JAX-based motion imitation learning research benchmark specifically designed for biomechanically accurate muscle-actuated models. It focuses on advancing research in muscle-driven locomotion and manipulation through high-performance neural policy training. MuscleMimic addresses the computational challenges of training neural policies on complex biomechanical models by:
- Muscle-Actuated Dynamics: Specialized support for physiologically accurate muscle models with Hill-type dynamics
- JAX/MJWarp Acceleration: GPU-parallel training with collision support and up to 8,192 environments for rapid experimentation
- Single Generalist Policy: A centralized policy trained on diverse datasets to achieve high-dimensional coordination across multiple gait patterns and motions.
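The scale of parallel simulation claimed above rests on batching the environment step over a leading batch axis. Here is a minimal NumPy sketch of that pattern; the state layout and step function are illustrative placeholders, not MuscleMimic's actual API:

```python
import numpy as np

N_ENVS = 8192      # number of parallel environments
STATE_DIM = 72     # e.g. one value per degree of freedom

def step(states, actions):
    """Toy batched dynamics: every environment advances in lockstep."""
    return states + 0.01 * actions   # shapes: (N_ENVS, STATE_DIM)

states = np.zeros((N_ENVS, STATE_DIM))
actions = np.ones((N_ENVS, STATE_DIM))
states = step(states, actions)
print(states.shape)  # (8192, 72)
```

In JAX the same pattern is typically expressed by writing a single-environment step, vectorizing it with jax.vmap, and compiling with jax.jit, which is what makes thousands of simultaneous environments cheap on a GPU.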
Key Features
- High-Performance Training: JAX JIT compilation with MuJoCo Warp backend acceleration
- Biomechanical Models: MyoBimanualArm and MyoFullBody
- Research-Ready: DeepMimic-style rewards with comprehensive validation metrics
- AMASS Integration: Automated retargeting of SMPL-format motion datasets.
- GMR-Fit Retargeting: Improved state-of-the-art inverse kinematics for high-quality imitation data.
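DeepMimic-style tracking rewards, mentioned above, are typically exponentials of squared pose and velocity errors against the reference motion. A minimal sketch of that reward shape follows; the weights and scales are illustrative defaults, not the benchmark's exact values:

```python
import numpy as np

def imitation_reward(qpos, qvel, ref_qpos, ref_qvel,
                     w_pose=0.65, w_vel=0.35, k_pose=2.0, k_vel=0.1):
    """DeepMimic-style reward: exp(-k * squared tracking error) per term."""
    pose_err = np.sum((qpos - ref_qpos) ** 2)
    vel_err = np.sum((qvel - ref_qvel) ** 2)
    return w_pose * np.exp(-k_pose * pose_err) + w_vel * np.exp(-k_vel * vel_err)

# Perfect tracking yields the maximum reward of w_pose + w_vel = 1.0.
q = np.zeros(72)
print(imitation_reward(q, q, q, q))  # 1.0
```

The exponential form keeps each term bounded in (0, 1], so individual reward components stay comparable regardless of the raw error magnitude.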
Current Available Models
| Model | Type | Joints | Muscles | DoFs | Focus |
|---|---|---|---|---|---|
| MyoBimanualArm | Fixed-base | 76 (36*) | 126 (64*) | 54 (14*) | Upper-body manipulation |
| MyoFullBody | Free-root | 123 (83*) | 416 (354*) | 72 (32*) | Locomotion and manipulation |
$^*$ denotes configurations with finger muscles temporarily disabled.
- Muscle Actuation: Hill-type muscle models with physiological activation dynamics
- Site Tracking: Biomechanically relevant anatomical landmarks for reward computation
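Hill-type actuators separate neural excitation from muscle activation with first-order dynamics whose time constant differs between activation and deactivation. A minimal sketch of that standard model follows; the time constants are common textbook values, not necessarily those used in these models:

```python
import numpy as np

def activation_step(a, u, dt=0.002, tau_act=0.01, tau_deact=0.04):
    """First-order activation dynamics: activation rises faster than it decays."""
    tau = np.where(u > a, tau_act, tau_deact)
    return a + dt * (u - a) / tau

a = 0.0
for _ in range(50):            # 100 ms of full excitation
    a = activation_step(a, u=1.0)
print(round(float(a), 3))      # 1.0 — activation saturates under full excitation
```

The asymmetry (tau_act < tau_deact) is what gives muscle-driven policies their characteristic lag: turning a muscle off takes several times longer than turning it on.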
System Requirements
Depending on how you plan to use MuscleMimic, the requirements differ:
- Training: A Linux machine with an NVIDIA GPU is required.
- Inference & Evaluation: Both Linux and macOS are fully supported.
Quick Start
Preliminaries
# 1. Install UV (faster package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
# 2. Install dependencies
uv sync
For CUDA (Linux x86_64), install the CUDA-enabled JAX extra:
uv sync --extra cuda
Test with Demo Cache
[!TIP] Recommended for first-time users. Start here for the fastest path to a working setup.
No AMASS download needed! We provide pre-retargeted demo motions for both MyoBimanualArm and MyoFullBody via a gated Hugging Face dataset.
1. Authenticate with Hugging Face
The demo dataset is hosted on Hugging Face and requires access approval:
- Go to amathislab/demo_dataset and request access.
- Once approved, create an access token at huggingface.co/settings/tokens.
- Log in from the terminal:
uv run hf auth login
2. Download Demo Cache
uv run python -c "from musclemimic.utils.demo_cache import setup_demo_for_bimanual; setup_demo_for_bimanual()"
uv run python -c "from musclemimic.utils.demo_cache import setup_demo_for_myo_fullbody; setup_demo_for_myo_fullbody()"
You can start a short training run with either model:
uv run bimanual/experiment.py --config-name=conf_bimanual_demo
uv run fullbody/experiment.py --config-name=conf_fullbody_demo
Evaluate a Checkpoint (MuJoCo CPU, e.g. macOS on Apple silicon)
Examples below assume you have already downloaded the demo cache for MyoFullBody:
uv run python -c "from musclemimic.utils.demo_cache import setup_demo_for_myo_fullbody; setup_demo_for_myo_fullbody()"
By default, the provided training configs log to Weights & Biases with wandb.mode=online.
If you do not want to use W&B, disable it explicitly:
uv run bimanual/experiment.py --config-name=conf_bimanual_demo wandb.mode=disabled
uv run fullbody/experiment.py --config-name=conf_fullbody_demo wandb.mode=disabled
Run evaluation with the MuJoCo viewer (CPU), using demo motions.
On macOS, use mjpython for viewer-based MuJoCo commands. On Linux, a regular python entrypoint is sufficient:
uv run mjpython fullbody/eval.py \
--path hf://amathislab/mm-10m-2 \
--motion_path KIT/314/walking_medium09_poses \
--use_mujoco \
--stochastic \
--eval_seed 0 \
--n_steps 1000 \
--mujoco_viewer
uv run mjpython fullbody/eval.py \
--path hf://amathislab/mm-10m-2 \
--motion_path KIT/348/turn_right03_poses \
--use_mujoco \
--stochastic \
--eval_seed 0 \
--n_steps 1000 \
--mujoco_viewer
uv run mjpython fullbody/eval.py \
--path hf://amathislab/mm-10m-2 \
--motion_path KIT/4/WalkInCounterClockwiseCircle04_poses \
--use_mujoco \
--stochastic \
--eval_seed 0 \
--n_steps 1000 \
--mujoco_viewer
Retargeting with GMR-Fit
MuscleMimic offers accurate retargeting to MyoFullBody and MyoBimanualArm based on General Motion Retargeting (GMR), but replaces manually defined joint configurations with SMPL fitting on the AMASS dataset. We provide the GMR-Fit retargeted dataset, as well as checkpoints pretrained on these motions, on Hugging Face.
Hugging Face Resources
- MyoBimanualArm
  - Checkpoints: amathislab/mm-bimanual-v0
  - Dataset: amathislab/musclemimic-bimanual-retargeted
- MyoFullBody
  - Checkpoints: amathislab/mm-fullbody-base
  - Dataset: amathislab/musclemimic-retargeted
Pre-retargeted GMR caches can be accessed in several ways. If you want to control the local cache location explicitly, set it first:
uv run musclemimic-set-all-caches --path /path/to/converted_datasets
- Automatic download: set retargeting_method: gmr in your config, and the required caches will be downloaded automatically.
- Manual download from the CLI:
uv run musclemimic-download-gmr-caches --dataset-group KIT_KINESIS_TRAINING_MOTIONS
uv run musclemimic-download-gmr-caches --dataset-group AMASS_BIMANUAL_TRAIN_MOTIONS --env-name MyoBimanualArm
- Python API:
from musclemimic.utils import download_gmr_dataset_group
download_gmr_dataset_group("KIT_KINESIS_TRAINING_MOTIONS")
download_gmr_dataset_group(dataset_group="AMASS_BIMANUAL_TRAIN_MOTIONS", env_name="MyoBimanualArm")
Full Retargeting with AMASS
If you prefer to retarget your own dataset in batch, follow these steps to download the full dataset from AMASS and set up the retargeting pipeline.
1. Download AMASS
Register and download the AMASS dataset from the AMASS website. Place all datasets in a directory (e.g., /path/to/amass), such that the folder has the following structure:
/path/to/amass/
├── ACCAD/
├── ...
├── KIT/
│ ├── 1/
│ │ ├── LeftTurn03_poses.npz
│ │ └── ...
│ └── ...
├── ...
Install SMPL dependencies and the GMR source package:
uv sync --extra smpl --group gmr
2. Download SMPL-H and MANO
Go to the MANO website. Register and download the following:
- Extended SMPL+H model (includes the SMPL-H model w/o hands).
- Models & Code (includes the hand models).
Extract the folders and place them in a directory (e.g., /path/to/smpl), such that the folder has the following structure:
/path/to/smpl/
├── mano_v1_2/
└── smplh/
3. Set Paths
Set the paths to the AMASS dataset, SMPL models, and specify a directory for the converted caches:
uv run musclemimic-set-amass-path --path /path/to/amass
uv run musclemimic-set-smpl-model-path --path /path/to/smpl
uv run musclemimic-set-all-caches --path /path/to/converted_datasets
These commands write user-specific settings to ~/.musclemimic/MUSCLEMIMIC_VARIABLES.yaml by default.
Set MUSCLEMIMIC_CONFIG_PATH if you want to use a different config file.
4. Convert SMPL-H and MANO to SMPLH_neutral.pkl
Run the conversion script to generate the SMPLH_neutral.pkl file needed for retargeting:
cd loco_mujoco/smpl
bash install_smplh.sh
5. Run the retargeting pipeline with your preferred dataset. We report the retargeting accuracy of GMR on several datasets in our preprint.
uv run scripts/retarget_dataset.py --model MyoFullBody --retargeting-method gmr --dataset KIT_KINESIS_TRAINING_MOTIONS --workers 8
uv run scripts/retarget_dataset.py --model MyoBimanualArm --retargeting-method gmr --dataset AMASS_BIMANUAL_MARGINAL_MOTIONS --workers 8
Training and Finetuning from a Checkpoint
The following examples show how to resume from a pretrained checkpoint for either targeted finetuning or broader continued training.
Finetune on a Specific Motion
For targeted finetuning, we reset the policy standard deviation to 3 to encourage exploration on the new motion.
uv run fullbody/experiment.py --config-name=conf_fullbody_gmr_resnet \
experiment.resume_from="hf://amathislab/mm-fullbody-base" \
experiment.reset_std_on_resume=3 \
experiment.task_factory.params.amass_dataset_conf.dataset_group=null \
experiment.task_factory.params.amass_dataset_conf.rel_dataset_path='["KIT/200/Handstand01_poses"]'
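The reset_std_on_resume=3 override above widens the Gaussian policy's exploration noise when finetuning begins, so the converged policy searches broadly on the new motion. Conceptually, for a policy that parameterizes its action distribution by a learned log standard deviation, the reset amounts to the following sketch (illustrative only, not the trainer's internals):

```python
import numpy as np

def reset_policy_std(log_std_params, new_std=3.0):
    """Overwrite the learned log-std so actions are sampled widely again."""
    return np.full_like(log_std_params, np.log(new_std))

log_std = np.log(np.full(32, 0.05))   # converged policy: tight action noise
log_std = reset_policy_std(log_std, new_std=3.0)
print(float(np.exp(log_std[0])))      # 3.0
```

Without such a reset, a converged policy's near-deterministic actions give the finetuning gradient little signal to discover the new behavior.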
Continue Training on a Larger Motion Set
To continue training on a broader motion distribution, resume from the same checkpoint and switch to the transition-augmented training set.
uv run fullbody/experiment.py --config-name=conf_fullbody_gmr_resnet \
experiment.resume_from="hf://amathislab/mm-fullbody-base" \
experiment.task_factory.params.amass_dataset_conf.dataset_group="KIT_KINESIS_TRANSITION_TRAINING_MOTIONS"
Visualization with Viser
You can use Viser for real-time policy visualization, including muscle tendon rendering.
Example:
# MyoBimanualArm visualization
uv run bimanual/eval.py \
--path outputs/YYYY-MM-DD/HH-MM-SS/checkpoints/XXXXXX/checkpoint_XXX \
--use_mujoco --viser_viewer
# MyoFullBody visualization
uv run fullbody/eval.py \
--path outputs/2025-10-12/09-32-55/checkpoints/2510120733/checkpoint_400 \
--use_mujoco --viser_viewer
Usage Notes
- Requires the --use_mujoco flag (Viser only works with CPU MuJoCo, not MJX)
Development
For contributor setup and review guidelines, see CONTRIBUTING.md.
Typical local workflow:
make install-dev
make precommit-install
make ci
pre-commit currently targets a curated subset of files while the repository is being migrated toward broader coverage. make lint and make format intentionally follow that same scoped set rather than reformatting the whole repository. Please keep changes to .pre-commit-config.yaml's files: allowlist in dedicated cleanup PRs rather than bundling them with functional changes.
Citation
If you use this code in your research, please cite:
@article{Li2026MuscleMimic,
title={Towards Embodied AI with MuscleMimic: Unlocking full-body musculoskeletal motor learning at scale},
author={Li, Chengkun and Wang, Cheryl and Ziliotto, Bianca and Simos, Merkourios and Kovecses, Jozsef and Durandau, Guillaume and Mathis, Alexander},
journal={arXiv preprint arXiv:2603.25544},
year={2026}
}
License
This project is licensed under the Apache License. See the LICENSE file for details.
Note that model checkpoints and data are licensed separately as indicated on the HuggingFace download pages.
This project also requires downloading additional third-party open-source software; please review each project's license terms before use.
Acknowledgments
Inspired by and built on:
- MyoSuite
- Mujoco_warp
- LocoMuJoCo
- SMPL-X - Body model for motion retargeting
- PureJaxRL
- MuJoCo Playground