A JAX-native MuJoCo environment suite for Envrax.
Mujorax is a lightweight open-source JAX-native MuJoCo environment suite for single-agent Reinforcement Learning (RL), built on top of Envrax. It wraps MuJoCo Playground environments with Envrax's JaxEnv so you can use them with envrax.make, envrax.make_vec, and the rest of Envrax's tooling.
It comes with 25 environments from the DM Control Suite. All environment logic follows a stateless functional design that builds on top of the MJX, JAX, and Chex packages to benefit from JAX accelerator efficiency.
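The stateless functional design means every transition is a pure function of the current state and action, which JAX can freely jit-compile and vectorise. A minimal sketch of the pattern with a toy point mass (an illustration of the idea only, not Mujorax's actual code):

```python
import jax
import jax.numpy as jnp

# Toy stateless environment: state is (position, velocity), action is a force.
# The step is a pure function -- no hidden mutation, so jax.jit is safe.
def step(state, action, dt=0.05):
    pos, vel = state
    vel = vel + dt * action          # integrate force into velocity
    pos = pos + dt * vel             # integrate velocity into position
    return (pos, vel)

step_jit = jax.jit(step)
state = (jnp.zeros(2), jnp.zeros(2))
state = step_jit(state, jnp.ones(2))  # one compiled transition
```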
## Why Mujorax?
Envrax provides a JAX-native, Gymnasium-style API standard for RL environments, but it doesn't ship with any environments of its own. One of the biggest application areas in RL is robotics, and the gold-standard physics engine there is MuJoCo, making it a natural fit for one of the first Envrax environment suites!
MuJoCo Playground is Google DeepMind's open-source library of MuJoCo environments, built on top of MJX (MuJoCo's JAX port that preserves the simulator's full physics fidelity). It already solves the hard parts: research-validated reward and termination logic for DM Control, locomotion, and manipulation environments. The only catch is that its environments expose a Brax-style MjxEnv API, which doesn't quite fit Envrax's API standard.
Rather than reinventing the wheel, Mujorax acts as a thin, type-safe wrapper around the MuJoCo Playground environments to maximise their benefits while maintaining Envrax's API standard, making it completely plug-and-play with Envrax's toolkit.
## Requirements
- Python 3.13+
- JAX 0.9+ (CPU, CUDA, or TPU backend)
## Installation

```
pip install mujorax
```

Or with uv:

```
uv add mujorax
```
## Quick Start

```python
import jax
import mujorax  # registers the suite at import
import envrax

env = envrax.make("mjx/cartpole_balance-v0")
obs, state = env.reset(jax.random.PRNGKey(0))
action = env.action_space.sample(jax.random.PRNGKey(1))
obs, state, reward, done, info = env.step(state, action)
```
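Because the step is functional, whole rollouts can live inside jax.lax.scan and be jit-compiled end to end. A sketch with a toy stand-in for an environment step (hypothetical names; with a real Mujorax env you would thread env.step through the scan body instead):

```python
import jax
import jax.numpy as jnp

# Toy functional step: pure (state, action) -> (next_state, reward).
def env_step(state, action):
    state = state + action
    reward = -jnp.sum(state ** 2)
    return state, reward

def rollout(state, actions):
    # scan threads the state through T steps and stacks the per-step rewards
    def body(carry, action):
        carry, reward = env_step(carry, action)
        return carry, reward
    final_state, rewards = jax.lax.scan(body, state, actions)
    return final_state, rewards

actions = jnp.ones((10, 2)) * 0.1          # T=10 steps of a 2-dim action
final_state, rewards = jax.jit(rollout)(jnp.zeros(2), actions)
```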
Vectorised rollouts work the same way:
```python
env = envrax.make_vec("mjx/cartpole_balance-v0", n_envs=128)
obs, state = env.reset(jax.random.PRNGKey(0))  # obs.shape == (128, 5)
```
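This kind of batching is, conceptually, jax.vmap applied to the functional reset and step over a batch of PRNG keys; the (128, 5) shape above comes from cartpole's 5-dimensional observation. A self-contained sketch with a toy reset (not Mujorax's real implementation):

```python
import jax

# Toy reset: one PRNG key -> one 5-dim observation.
# Hypothetical stand-in for an environment's functional reset.
def reset(key):
    return jax.random.uniform(key, (5,))

# Split one key into 128 independent keys, then map reset over them.
keys = jax.random.split(jax.random.PRNGKey(0), 128)
batch_obs = jax.vmap(reset)(keys)
# batch_obs.shape == (128, 5)
```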
You can also use make_multi to work with several heterogeneous environments at once:
```python
env = envrax.make_multi([
    "mjx/cartpole_balance-v0",
    "mjx/cheetah_run-v0",
])
obs_list, state_list = env.reset(jax.random.PRNGKey(0))  # one entry per env
```
Or use make_multi_vec for vectorised parallel copies of each environment:
```python
env = envrax.make_multi_vec(
    ["mjx/cartpole_balance-v0", "mjx/cheetah_run-v0"],
    n_envs=64,
)
obs_list, state_list = env.reset(jax.random.PRNGKey(0))
# each entry's obs.shape == (64, *single_obs.shape)
```
Mujorax auto-detects whether a CUDA backend is available; on CPU-only systems it transparently falls back to MJX's pure-JAX physics implementation.
You can override this choice via MjxPlaygroundConfig(config_overrides={"impl": ...}) if needed.
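For example, forcing the pure-JAX implementation might look like the following sketch. The import path and the "jax" value for impl are assumptions here; check the package's own documentation for the exact config keys and how the config object is passed to the registry:

```python
# Assumed import path for the config class mentioned above.
from mujorax import MjxPlaygroundConfig

# Force MJX's pure-JAX physics implementation regardless of backend detection
# ("jax" is an assumed value for the impl override).
config = MjxPlaygroundConfig(config_overrides={"impl": "jax"})
```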
## Environments
All environments share canonical IDs in the form mjx/<name>-v0. Here's the full list of supported environments:
| Canonical ID | Description |
|---|---|
| mjx/acrobot_swingup-v0 | Two-link underactuated pendulum; dense reward for swinging the tip to target |
| mjx/acrobot_swingup_sparse-v0 | Same as acrobot_swingup with a sparse (binary) reward |
| mjx/ball_in_cup-v0 | Planar ball-and-cup catching task; sparse reward when caught |
| mjx/cartpole_balance-v0 | Cart starts near upright; dense reward for keeping the pole upright |
| mjx/cartpole_balance_sparse-v0 | Same as cartpole_balance with a sparse reward |
| mjx/cartpole_swingup-v0 | Cart starts hanging; dense reward for swinging up and balancing |
| mjx/cartpole_swingup_sparse-v0 | Same as cartpole_swingup with a sparse reward |
| mjx/cheetah_run-v0 | Planar bipedal cheetah; dense reward proportional to forward speed |
| mjx/finger_spin-v0 | Two-DoF finger spinning a free body; dense reward for angular velocity |
| mjx/finger_turn_easy-v0 | Two-DoF finger rotating a body to a target with a large tolerance |
| mjx/finger_turn_hard-v0 | Same as finger_turn_easy with a tighter tolerance |
| mjx/fish_swim-v0 | 3D free-swimming fish; dense reward for swimming to a randomised target |
| mjx/hopper_hop-v0 | One-legged planar hopper; dense reward for forward speed |
| mjx/hopper_stand-v0 | One-legged hopper; dense reward for standing upright |
| mjx/humanoid_run-v0 | 21-DoF humanoid; dense reward for matching a running speed |
| mjx/humanoid_stand-v0 | 21-DoF humanoid; dense reward for standing upright |
| mjx/humanoid_walk-v0 | 21-DoF humanoid; dense reward for matching a walking speed |
| mjx/pendulum_swingup-v0 | Single-link pendulum; dense reward for swinging up and balancing |
| mjx/point_mass-v0 | Planar point mass actuated in 2D; dense reward to a randomised target |
| mjx/reacher_easy-v0 | Two-link planar arm reaching a large target |
| mjx/reacher_hard-v0 | Same as reacher_easy with a smaller target |
| mjx/swimmer_swimmer6-v0 | Six-link planar swimmer; dense reward for the head reaching a target |
| mjx/walker_run-v0 | Planar bipedal walker; dense reward for running speed |
| mjx/walker_stand-v0 | Planar bipedal walker; dense reward for standing upright |
| mjx/walker_walk-v0 | Planar bipedal walker; dense reward for walking speed |
## Acknowledgements
Mujorax wouldn't be possible without these incredible projects:
- MuJoCo Playground — the underlying environment implementations.
- MuJoCo and MJX — the physics engine and JAX bindings.
- Envrax — the registry and base environment API standard.
❤️ Thank you to all the developers involved - you guys are awesome! ❤️