A reinforcement learning environment for simulating target-MDP environments (Airplane, Car, Process Control), with gymnasium and gymnax support


TargetGym: Reinforcement Learning Environments for Target MDPs

TargetGym is a lightweight yet realistic collection of reinforcement learning environments designed around target MDPs -- tasks where the objective is to reach and maintain a specific subset of states (target states).

Environments are built to be fast, parallelizable, and physics-based, enabling large-scale RL research while capturing the core challenges of real-world control systems such as delays, irrecoverable states, partial observability, and competing objectives.


Environments

Physical Control

| Environment | Goal | Action Dim | Obs Dim | Steps/s (CPU, 10^8 steps) |
|---|---|---|---|---|
| Plane 2D | Reach and hold a target altitude with an A320-like aircraft | 2 (power, stick) | 9 | ~0.54M |
| Plane 3D -- Heading | Reach and hold a target altitude and heading | 3 (power, stick, aileron) | 15 | -- |
| Plane 3D -- Circle | Maintain altitude while orbiting a circular path | 3 (power, stick, aileron) | 17 | -- |
| Plane 3D -- Figure Eight | Follow a 3D twisted lemniscate (figure-8 with altitude crossovers) | 3 (power, stick, aileron) | 19 | -- |

Process Control (adapted from PC-gym)

| Environment | Goal | Action Dim | Obs Dim | Steps/s (CPU, 10^8 steps) |
|---|---|---|---|---|
| CSTR | Control coolant temperature to keep reactant concentration at a target | 1 (coolant temp) | 3 | ~1.49M |
| First Order System | Drive a first-order lag system to a target setpoint | 1 (input) | 2 | -- |
| Four Tank | Control water levels in the two lower tanks via two pumps in a coupled four-tank network | 2 (pump voltages) | 6 | -- |
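The First Order System above is a plain first-order lag; the sketch below (our own illustration with made-up gain `K` and time constant `tau`, not the package's implementation) shows why a constant input settles the state at `K * u`:

```python
# First-order lag driven by a constant input u.
# K (gain) and tau (time constant) are illustrative values, not TargetGym's.
def simulate_first_order(u, x0=0.0, K=2.0, tau=5.0, dt=0.1, steps=1000):
    x = x0
    for _ in range(steps):
        x += dt * (K * u - x) / tau  # Euler step of dx/dt = (K*u - x) / tau
    return x

# With constant input u, the state converges to the steady state K * u.
print(simulate_first_order(0.5))  # approaches K * 0.5 = 1.0
```

The RL task is the inverse problem: pick `u` over time so that `x` reaches and holds an arbitrary setpoint.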

Industrial / Energy

| Environment | Goal | Action Dim | Obs Dim | Steps/s (CPU, 10^8 steps) |
|---|---|---|---|---|
| Glass Furnace | Track a crown temperature schedule in a 3-zone glass melting furnace | 1 (fuel flow) | 3 | -- |
| Nuclear Reactor | Control neutron power via rod reactivity in a PWR with xenon dynamics | 1 (rod reactivity) | 4 | -- |

Complexity Classification

Environments are designed to span a wide range of difficulty, making TargetGym suitable both as an RL benchmark suite and as a curriculum. Complexity is assessed from two angles: dynamics (linearity, coupling, stiffness) and RL difficulty (state/action dimensionality, horizon length, reward shaping, partial observability).

| Tier | Environment | Obs Dim | Action Dim | Dynamics | Key RL Challenges |
|---|---|---|---|---|---|
| 1 -- Trivial | First Order System | 2 | 1 | Linear SISO | Baseline sanity check |
| 2 -- Medium | CSTR | 3 | 1 | Nonlinear SISO | Exponential Arrhenius kinetics, stiff dynamics, exothermic runaway risk |
| 3 -- Hard | Four Tank | 6 | 2 | Nonlinear MIMO | Multi-objective, cross-coupled inputs, square-root outflow |
| 4 -- Very Hard | Plane 2D | 9 | 2 | 2D aerodynamics | Coupled nonlinear aerodynamics, very long horizon (10,000 steps) |
| 4 -- Very Hard | Glass Furnace | 3 | 1 | Nonlinear radiation (T^4) | Partial observability (5/8 states hidden), multi-hour transients, schedule tracking |
| 4 -- Very Hard | Nuclear Reactor | 4 | 1 | Stiff multi-timescale | Partial observability (7/11 states hidden), xenon memory trap, 86k-step horizon |
| 5 -- Extreme | Plane 3D -- Heading | 15 | 3 | 3D aerodynamics | Multi-objective (altitude + heading), roll/pitch/yaw coupling |
| 5 -- Extreme | Plane 3D -- Circle | 17 | 3 | 3D + path following | Sustained coordinated banked turns, km-scale circular path |
| 6 -- Extreme+ | Plane 3D -- Figure Eight | 19 | 3 | 3D + twisted lemniscate | 3D path with altitude crossovers, direction reversal, MPC >> PID gap |
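The "exponential Arrhenius kinetics" entry for the CSTR refers to a reaction rate of the form k0 * exp(-Ea / (R * T)); a toy calculation (with illustrative constants, not those of TargetGym's CSTR model) shows how sharply the rate responds to temperature:

```python
import math

# Arrhenius reaction rate: k(T) = k0 * exp(-Ea / (R * T)).
# k0 and Ea are illustrative values, not TargetGym's CSTR parameters.
def reaction_rate(T, k0=7.2e10, Ea=7.27e4, R=8.314):
    return k0 * math.exp(-Ea / (R * T))

# A 10 K temperature rise roughly doubles the reaction rate -- the root
# of the stiff dynamics and exothermic runaway risk noted in the table.
print(reaction_rate(360.0) / reaction_rate(350.0))
```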

Figures


  • Plane -- altitude under constant power/stick inputs
  • CSTR -- reactor temperature under constant coolant temperatures
  • First Order System -- state trajectories under constant inputs
  • Four Tank -- tank 1 level under varying pump voltages

Videos


  • Plane 2D (PID)
  • CSTR (PID)
  • First Order System (PID)
  • Four Tank (PID)
  • Nuclear Reactor (PID)

Expert Baselines

Every environment ships with two expert baselines for comparison and for bootstrapping RL exploration:

  • PID: Relay-autotuned or gradient-tuned PID controllers. Gain-scheduled variants adapt to different operating points. For 3D plane tasks, specialized multi-loop PIDs handle altitude + heading/circle/figure-8 tracking.
  • MPC: Model Predictive Control via CasADi/IPOPT (process control envs) or gradient-based MPC through JAX autodiff (plane, car). Provides near-optimal performance but is too slow for real-time use.

The PID controllers, while suboptimal, provide useful expert demonstrations for expert-guided RL methods (behavior cloning warm-start, demo-augmented replay, DAgger). The performance gap between PID and MPC (especially on Plane 3D Figure-8: PID ~15 vs MPC ~560) demonstrates that these tasks genuinely require learned multi-axis coordination.
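As a rough sketch of what these PID baselines compute, here is a textbook positional PID (not the package's relay-autotuned, gain-scheduled implementation; the gains and the plant are made up for illustration):

```python
class PID:
    """Textbook positional PID controller for setpoint tracking."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def __call__(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a toy plant dx/dt = u - x to a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
x = 0.0
for _ in range(500):
    x += 0.1 * (pid(1.0, x) - x)
print(round(x, 3))  # settles near the setpoint 1.0
```

The multi-loop variants for the 3D plane tasks chain several such loops (e.g. an altitude loop commanding pitch, a pitch loop commanding the stick), which is exactly where the gap to MPC opens up.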


Features

  • Fast & parallelizable with JAX -- scale to thousands of parallel environments on GPU/TPU.
  • Physics-based: Derived from modeling equations, not arcade physics.
  • Reliable: Unit-tested for stability and reproducibility.
  • Target MDP focus: Each task is about reaching and maintaining target states.
  • Expert baselines: PID and MPC controllers for every environment.
  • Challenging dynamics: Captures irrecoverable states, partial observability, and momentum effects.
  • Visualization: All environments come with rendering and video generation.
  • Compatible with RL libraries: Offers Gymnax and Gymnasium interfaces.
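The parallel-rollout pattern behind the first bullet can be illustrated with a vectorized batch of first-order-lag environments; gymnax achieves the same effect with `jax.vmap` over a functional step, while this NumPy sketch (names and dynamics are our own, not TargetGym's API) just shows the shape of the computation:

```python
import numpy as np

# Advance N independent first-order-lag environments in one array op.
# Illustrative stand-in for vmapping a functional env.step over a batch.
def batched_step(states, actions, K=2.0, tau=5.0, dt=0.1):
    return states + dt * (K * actions - states) / tau

rng = np.random.default_rng(0)
states = np.zeros(4096)            # 4096 parallel environments
actions = rng.uniform(0, 1, 4096)  # one (constant) action per environment
for _ in range(1000):
    states = batched_step(states, actions)

# Each environment settles at its own steady state K * action.
print(np.allclose(states, 2.0 * actions, atol=1e-3))  # True
```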

Installation

TargetGym is available on PyPI; install with:

# Using pip
pip install target-gym

# Or with Poetry
poetry add target-gym

Usage

Here's a minimal example of running an episode in the Plane environment and saving a video:

from target_gym import Plane, PlaneParams

# Create env
env = Plane()
seed = 42
env_params = PlaneParams(max_steps_in_episode=1_000)

# Simple constant policy with 80% power and 0 deg stick input
action = (0.8, 0.0)

# Save the video
env.save_video(
    lambda o: action,
    seed,
    folder="videos",
    episode_index=0,
    params=env_params,
    format="gif",
)

Or train an agent using your favorite RL library (example with stable-baselines3):

from target_gym import GymnasiumPlane
from stable_baselines3 import SAC

env = GymnasiumPlane()
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000, log_interval=4)
model.save("sac_plane")

obs, info = env.reset()
while True:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break

Challenges Modeled

TargetGym tasks are designed to expose RL agents to realistic control challenges:

  • Delays: Inputs (like engine power) take time to fully apply.
  • Partial observability: Some parts of the state cannot be directly measured (glass furnace, reactor).
  • Competing objectives: Reach the target state quickly while minimizing overshoot or cost.
  • Momentum effects: Physical inertia delays control effectiveness.
  • Irrecoverable states: Certain trajectories inevitably lead to failure (crash, runaway).
  • Multi-timescale dynamics: From millisecond neutronics to hour-long xenon transients (reactor).
  • Non-stationarity: Perturbations (e.g. wind, turbulence) that change the dynamics mid-episode (see Roadmap).
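Irrecoverable states in particular can be made concrete with a toy vertical-descent model: with net upward acceleration capped at `a_max`, any state with v < -sqrt(2 * a_max * h) crashes no matter what the controller does (a hand-rolled illustration, unrelated to TargetGym's actual flight dynamics):

```python
import math

# Toy 1-D descent: altitude h, vertical speed v, net upward acceleration
# bounded by a_max. Even at full thrust the craft needs v^2 / (2 * a_max)
# metres to arrest the fall, so faster descents than the braking limit
# are already doomed -- no policy can recover.
def recoverable(h, v, a_max=5.0):
    return v >= -math.sqrt(2 * a_max * h)

print(recoverable(h=100.0, v=-30.0))  # True: full thrust can still stop the fall
print(recoverable(h=100.0, v=-40.0))  # False: a crash is inevitable
```

An agent therefore has to learn to avoid such regions in advance, since no reward signal inside them can help.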

Roadmap

  • Add perturbations (wind, turbulence) for non-stationary dynamics.
  • Provide benchmark results for popular RL baselines.
  • Mature glass furnace and reactor environments (shorter episodes, better reward shaping).
  • Add random orientation variations to circle and heading tasks.

Contributing

Contributions are welcome! Open an issue or PR if you have suggestions, bug reports, or new features.

For development, install the dev dependencies, which include the test, lint, and agent extras.

git clone https://github.com/YannBerthelot/TargetGym.git
cd TargetGym

# Using Poetry (recommended)
poetry install --with dev

# Using pip
python -m pip install -e ".[dev]"

Citation

If you use TargetGym in your research or project, please cite it as:

@misc{targetgym2025,
  title        = {TargetGym: Reinforcement Learning Environments for Target MDPs},
  author       = {Yann Berthelot},
  year         = {2025},
  url          = {https://github.com/YannBerthelot/TargetGym},
  note         = {Lightweight physics-based RL environments for aircraft, process control, and industrial systems}
}

License

MIT License -- free to use in research and projects.
