🎯 TargetGym: Reinforcement Learning Environments for Target MDPs
TargetGym is a lightweight yet realistic collection of reinforcement learning environments designed around target MDPs — tasks where the objective is to reach and maintain a specific subset of states (target states).
Environments are built to be fast, parallelizable, and physics-based, enabling large-scale RL research while capturing the core challenges of real-world control systems such as delays, irrecoverable states, partial observability, and competing objectives.
Currently included environments:
- 🛩 Plane – control a 2D Airbus A320-like aircraft (stable target MDP)
- 🚗 Car – maintain a desired speed on a road (stable target MDP)
- 🚲 Bike – stabilize and steer a 2D bicycle model (unstable target MDP, after Randløv & Alstrøm)
✨ Features
- ⚡ Fast & parallelizable with JAX — scale to thousands of parallel environments on GPU/TPU.
- 📐 Physics-based: dynamics derived from physical modeling equations, not arcade physics.
- 🧪 Reliable: Unit-tested for stability and reproducibility.
- 🎯 Target MDP focus: Each task is about reaching and maintaining target states.
- 🌀 Challenging dynamics: Captures irrecoverable states and momentum effects.
- 🔄 Compatible with RL libraries: Offers Gymnax and Gymnasium interfaces.
- 🌟 Upcoming features: Environmental perturbations (wind, turbulence, bumpy road) and fuel consumption.
📊 Example: Stable Altitude in Plane
Below is an example of how the stable altitude changes with engine power and pitch in the Plane environment:
This illustrates multi-stability: with fixed power and pitch, the aircraft naturally converges to a stable altitude. Similar properties hold in the Car environment.
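This convergence behaviour can be sketched with a toy first-order model. This is a hand-written illustration, not TargetGym's actual flight dynamics: the trim function `stable_altitude` and the gains below are made up for the example.

```python
def stable_altitude(power: float, pitch: float) -> float:
    """Toy trim altitude: higher power or pitch gives a higher equilibrium (illustrative only)."""
    return 1_000.0 * power + 5_000.0 * pitch


def step(altitude: float, power: float, pitch: float, dt: float = 0.1, k: float = 0.5) -> float:
    """First-order relaxation of the altitude towards the trim altitude."""
    return altitude + dt * k * (stable_altitude(power, pitch) - altitude)


# With fixed inputs, any starting altitude converges to the same equilibrium.
altitude = 0.0
for _ in range(500):
    altitude = step(altitude, power=0.8, pitch=0.1)

print(round(altitude, 1))  # converges to stable_altitude(0.8, 0.1) == 1300.0
```

The key property mirrored here is that the equilibrium is set by the inputs alone, so a constant policy is enough to reach and hold a specific altitude.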
🚀 Installation
Install from PyPI with:

```bash
# Using pip
pip install target-gym

# Or with Poetry
poetry add target-gym
```
🎮 Usage
Here’s a minimal example of running an episode in the Plane environment and saving a video:
```python
from target_gym import Plane, PlaneParams

# Create the environment
env = Plane()
seed = 42
env_params = PlaneParams(max_steps_in_episode=1_000)

# Simple constant policy: 80% power and 0° stick input
action = (0.8, 0.0)

# Run an episode and save it as a GIF
env.save_video(
    lambda obs: action,
    seed,
    folder="videos",
    episode_index=0,
    params=env_params,
    format="gif",
)
```
Or train an agent using your favorite RL library (example with stable-baselines3):
```python
from stable_baselines3 import SAC

from target_gym import PlaneGymnasium

env = PlaneGymnasium()

# Train
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000, log_interval=4)
model.save("sac_plane")

# Roll out the trained policy
obs, info = env.reset()
while True:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break
```
🧩 Challenges Modeled
TargetGym tasks are designed to expose RL agents to realistic control challenges:
- ⏳ Delays: Inputs (like engine power) take time to fully apply.
- 👀 Partial observability: Some parts of the state cannot be directly measured.
- 🏁 Competing objectives: Reach the target state quickly while minimizing overshoot or cost.
- 🌀 Momentum effects: Physical inertia delays control effectiveness.
- ⚠️ Irrecoverable states: Certain trajectories inevitably lead to failure.
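The delay and momentum challenges above can be illustrated with a generic first-order actuator lag. This is a sketch of the general idea, not TargetGym's internal model; the function name and the smoothing factor `alpha` are invented for the example.

```python
def lag_step(applied: float, commanded: float, alpha: float = 0.1) -> float:
    """Move the applied input a fraction alpha of the way towards the commanded input."""
    return applied + alpha * (commanded - applied)


# Command full power from a standstill: the applied power ramps up gradually,
# so the agent's action only takes full effect several steps later.
applied = 0.0
history = []
for _ in range(30):
    applied = lag_step(applied, commanded=1.0)
    history.append(applied)

print(f"after 1 step: {history[0]:.2f}, after 30 steps: {history[-1]:.2f}")
```

An agent that ignores this lag will overshoot the target state, which is exactly the kind of behaviour these environments are meant to penalize.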
📦 Roadmap
- Add perturbations (wind, turbulence, uneven terrain) for non-stationary dynamics.
- Easier interface for creating partially-observable variants.
- Provide benchmark results for popular RL baselines.
- Add fuel consumption and resource constraints.
- Add more tasks.
🤝 Contributing
Contributions are welcome! Open an issue or PR if you have suggestions, bug reports, or new features.
📖 Citation
If you use TargetGym in your research or project, please cite it as:
```bibtex
@misc{targetgym2025,
  title  = {TargetGym: Reinforcement Learning Environments for Target MDPs},
  author = {Yann Berthelot},
  year   = {2025},
  url    = {https://github.com/YannBerthelot/TargetGym},
  note   = {Lightweight physics-based RL environments for aircraft, car, and bike control}
}
```
📜 License
MIT License – free to use in research and projects.