# Plangym

Plangym is a Python library that extends Gymnasium environments for planning algorithms. It provides the ability to get and set the complete environment state, enabling deterministic rollouts from arbitrary states. This is critical for planning algorithms that need to branch execution.
## Key Features

- **State manipulation**: `get_state()` and `set_state()` for full environment state control
- **Batch stepping**: Execute multiple state-action pairs in a single call
- **Parallel execution**: Built-in multiprocessing and Ray support for distributed rollouts
- **Gymnasium compatible**: Works with `gym.Wrapper` and the standard Gym API
- **Delayed initialization**: Serialize environments before setup for distributed workers
## Table of Contents
- Supported Environments
- Requirements
- Installation
- Quick Start
- Developer Guide
- Local CI with act
- Architecture
- License
- Contributing
## Supported Environments

| Environment Type | Package | Description |
|---|---|---|
| Classic Control | `gymnasium` | CartPole, Pendulum, MountainCar, etc. |
| Box2D | `gymnasium[box2d]` | LunarLander, BipedalWalker, CarRacing |
| Atari | `ale-py` | Atari 2600 games via Arcade Learning Environment |
| dm_control | `dm-control` | DeepMind Control Suite with MuJoCo physics |
| MuJoCo | `mujoco` | MuJoCo physics environments |
| Retro | `stable-retro` | Classic console games (Genesis, SNES, etc.) |
| NES | `nes-py` | NES games including Super Mario Bros |
## Requirements

### Python Version

- Python 3.10 or higher

### System Dependencies

#### Ubuntu / Debian

```bash
# Install all system dependencies for headless rendering (EGL, GLU, X11)
make install-system-deps

# Or manually:
sudo apt-get update
sudo apt-get install -y xvfb libglu1-mesa libegl1-mesa-dev libgl1-mesa-glx x11-utils
```

For NES environments (nes-py):

```bash
sudo apt-get install -y build-essential clang libstdc++-10-dev
```
#### macOS

```bash
brew install --cask xquartz
brew install swig libzip

# Create X11 socket directory if needed
if [ ! -d /tmp/.X11-unix ]; then
    sudo mkdir /tmp/.X11-unix
    sudo chmod 1777 /tmp/.X11-unix
    sudo chown root /tmp/.X11-unix
fi
```
#### WSL2 (Windows)

```bash
# Install all system dependencies for headless rendering (EGL, GLU, X11)
make install-system-deps

# Or manually:
sudo apt-get update
sudo apt-get install -y xvfb libglu1-mesa libegl1-mesa-dev libgl1-mesa-glx x11-utils
```

For GUI rendering, install an X server on Windows (e.g., VcXsrv) or use headless mode.
## Installation

### Quick Install

```bash
# Using pip
pip install plangym

# Using uv
uv add plangym
```
### Install with Optional Extras

Plangym provides optional extras for different environment types:

| Extra | Description | Includes |
|---|---|---|
| `atari` | Atari 2600 games | ale-py, gymnasium[atari] |
| `nes` | NES / Super Mario | nes-py, gym-super-mario-bros |
| `classic-control` | Classic control envs | gymnasium[classic_control], pygame |
| `dm_control` | DeepMind Control Suite | mujoco, dm-control |
| `retro` | Retro console games | stable-retro |
| `box_2d` | Box2D physics | box2d-py |
| `ray` | Distributed computing | ray |
| `jupyter` | Notebook support | jupyterlab |

```bash
# Install specific extras
pip install "plangym[atari,dm_control]"

# Install all environment extras
pip install "plangym[atari,nes,classic-control,dm_control,retro,box_2d,ray]"
```
### Development Installation

```bash
git clone https://github.com/FragileTech/plangym.git
cd plangym
uv sync --all-extras
```

### ROM Installation

For Retro environments, you need to import ROM files:

```bash
# Retro ROMs (requires ROM files)
python -m plangym.scripts.import_retro_roms
```

Note: Atari ROMs are now bundled with ale-py >= 0.9, so no additional installation is needed for Atari environments.
## Quick Start

### Basic Environment Stepping

```python
import plangym

env = plangym.make(name="CartPole-v1")
state, obs, info = env.reset()

# Save state for later
saved_state = state.copy()

# Take a step
action = env.action_space.sample()
new_state, obs, reward, terminated, truncated, info = env.step(state=state, action=action)

# Restore the saved state and try a different action
different_action = env.action_space.sample()
new_state2, obs2, reward2, _, _, _ = env.step(state=saved_state, action=different_action)
```
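The save/restore branching idea above can be illustrated without plangym at all. The `ToyEnv` below is hypothetical, stdlib-only, and exists purely to show the pattern: snapshot the state, step down one branch, rewind, and step down another.

```python
# Toy stand-in for the save/restore pattern: a counter "environment"
# whose full state is a single integer. ToyEnv is hypothetical and
# NOT part of plangym; it only illustrates the branching idea.
import copy


class ToyEnv:
    def __init__(self):
        self.position = 0

    def get_state(self):
        # Return a copy so later steps cannot mutate the snapshot
        return copy.deepcopy(self.position)

    def set_state(self, state):
        self.position = state

    def step(self, action):
        self.position += action  # action is +1 or -1
        return self.position


env = ToyEnv()
env.step(+1)
saved = env.get_state()      # snapshot at position 1

env.step(+1)                 # branch A: position 2
branch_a = env.get_state()

env.set_state(saved)         # rewind to the snapshot
env.step(-1)                 # branch B: position 0
branch_b = env.get_state()

print(branch_a, branch_b)    # → 2 0
```

Both branches start from the same snapshot, which is exactly what a planning algorithm needs when it expands a search tree.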
### Batch Stepping

Execute multiple state-action pairs efficiently:

```python
import plangym

env = plangym.make(name="CartPole-v1")
state, obs, info = env.reset()

# Create a batch of states and actions
states = [state.copy() for _ in range(10)]
actions = [env.action_space.sample() for _ in range(10)]

# Step all at once
new_states, observations, rewards, terminateds, truncateds, infos = env.step_batch(
    states=states,
    actions=actions,
)
```
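A common use of batch stepping in planning is to expand several candidate actions from one state and keep the one with the highest reward. The sketch below simulates that selection loop with a toy `step_batch` (hypothetical, not plangym's implementation):

```python
# Pick the best of several candidate actions by reward.
# The step_batch function here is a toy stand-in, not plangym's API.

def step_batch(states, actions):
    """Toy batch step: reward 1.0 when the resulting state is even."""
    new_states = [s + a for s, a in zip(states, actions)]
    rewards = [1.0 if ns % 2 == 0 else 0.0 for ns in new_states]
    return new_states, rewards


state = 3
candidates = [0, 1, 2, 3]
states = [state] * len(candidates)
new_states, rewards = step_batch(states, candidates)

# argmax over the rewards without numpy
best_idx = max(range(len(rewards)), key=rewards.__getitem__)
best_action = candidates[best_idx]
print(best_action)  # → 1
```

With the real `env.step_batch`, the same loop works unchanged: duplicate the root state, step once per candidate action, then rank the results by reward.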
### Parallel Execution

Use multiple workers for faster rollouts:

```python
import plangym

# Create environment with 4 parallel workers
env = plangym.make(name="ALE/MsPacman-v5", n_workers=4)
state, obs, info = env.reset()

states = [state.copy() for _ in range(100)]
actions = [env.action_space.sample() for _ in range(100)]

# Steps are distributed across workers
new_states, observations, rewards, terminateds, truncateds, infos = env.step_batch(
    states=states,
    actions=actions,
)
```
## Developer Guide

### Development Setup

```bash
git clone https://github.com/FragileTech/plangym.git
cd plangym
uv sync --all-extras
```

### Code Style

Plangym uses Ruff for linting and formatting.

```bash
# Auto-fix and format code
make style

# Check code style (no modifications)
make check
```
### Running Tests

```bash
# Run the full test suite
make test

# Run tests in parallel (default: 2 workers)
make test-parallel

# Run tests with a custom worker count
n=4 make test-parallel

# Run classic control tests (single-threaded)
make test-singlecore

# Run doctests
make test-doctest
```

Running individual test files:

```bash
# dm_control tests (requires MUJOCO_GL for headless rendering)
MUJOCO_GL=egl uv run pytest tests/control/test_dm_control.py -s

# Specific test
uv run pytest tests/test_core.py::TestCoreEnv::test_step -v
```
Environment variables:

| Variable | Description |
|---|---|
| `MUJOCO_GL=egl` | Headless MuJoCo rendering |
| `PYVIRTUALDISPLAY_DISPLAYFD=0` | Virtual display for rendering tests |
| `SKIP_CLASSIC_CONTROL=1` | Skip classic control in parallel runs |
| `SKIP_RENDER=True` | Skip rendering tests |
| `n=2` | Number of parallel test workers |
### Code Coverage

```bash
# Run all coverage targets
make codecov

# Individual coverage targets
make codecov-parallel    # Parallel tests
make codecov-singlecore  # Single-core tests
```

### Building Documentation

```bash
# Build Sphinx documentation
make build-docs

# Serve documentation locally
make serve-docs
```

### Docker

```bash
# Build Docker image
make docker-build

# Run an interactive shell in the container
make docker-shell

# Run tests in Docker
make docker-test

# Run a Jupyter notebook server
make docker-notebook
```
## Local CI with act

act allows you to run GitHub Actions workflows locally for debugging.

### Prerequisites

- Docker installed and running
  - For WSL2: enable Docker Desktop WSL Integration in Settings → Resources → WSL Integration
- act installed:

  ```bash
  # macOS
  brew install act

  # Linux (using Go)
  go install github.com/nektos/act@latest

  # Or download from GitHub releases
  ```

### Configuration

Plangym includes pre-configured act settings:

- `.actrc` - Default act configuration
- `.secrets` - Local secrets file (gitignored)
### Running Workflows Locally

```bash
# List all available jobs
act -l

# Run specific jobs
act -j style-check        # Lint check
act -j pytest             # Run tests
act -j build-test-package # Build and test package

# Dry run (see what would execute)
act -n

# Run with verbose output
act -j style-check -v
```
### Secrets Setup

Edit `.secrets` to add your credentials for full CI functionality:

```bash
# .secrets file format
ROM_PASSWORD=your_rom_password
CODECOV_TOKEN=your_codecov_token
TEST_PYPI_PASS=your_test_pypi_token
BOT_AUTH_TOKEN=your_github_bot_token
PYPI_PASS=your_pypi_token
```

Note: The `.secrets` file is gitignored and should never be committed.
### Troubleshooting act

**Docker not found in WSL2**

Enable WSL integration in Docker Desktop:

1. Open Docker Desktop
2. Go to Settings → Resources → WSL Integration
3. Enable integration for your WSL distro
4. Restart Docker Desktop

**Job runs but fails on specific actions**

Some GitHub Actions may not work perfectly with act. Common issues:

- `actions/cache` - May need the `--reuse` flag
- Platform-specific steps - act only runs Linux containers
- Service containers - May require additional configuration
## Architecture

```
plangym/
├── core.py              # PlanEnv, PlangymEnv base classes
├── registry.py          # make() factory function
├── control/             # Physics environments
│   ├── classic_control.py
│   ├── dm_control.py
│   ├── mujoco.py
│   └── box2d.py
├── videogames/          # Emulator environments
│   ├── atari.py
│   ├── retro.py
│   └── nes.py
└── vectorization/       # Parallel execution
    ├── env.py           # VectorizedEnv base
    ├── parallel.py      # Multiprocessing
    └── ray.py           # Ray distributed
```
### Core Classes

| Class | Description |
|---|---|
| `PlanEnv` | Abstract base defining the `get_state()`, `set_state()`, `step()` interface |
| `PlangymEnv` | Wraps Gymnasium environments with state manipulation |
| `VectorizedEnv` | Base for parallel execution backends |
| `ParallelEnv` | Multiprocessing-based parallel stepping |
| `RayEnv` | Ray-based distributed stepping |
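The contract that `PlanEnv` imposes on its subclasses can be sketched as an abstract base class. The method names follow the table above, but the simplified signatures and the `CounterEnv` subclass are illustrative guesses, not plangym's exact API:

```python
from abc import ABC, abstractmethod


class PlanEnvSketch(ABC):
    """Simplified sketch of the PlanEnv contract: state in, state out."""

    @abstractmethod
    def get_state(self):
        """Return a serializable snapshot of the full environment state."""

    @abstractmethod
    def set_state(self, state):
        """Restore the environment to a previously captured snapshot."""

    @abstractmethod
    def step(self, action, state=None):
        """Optionally restore `state` first, then apply `action`."""


class CounterEnv(PlanEnvSketch):
    """Minimal concrete subclass: the whole state is one integer."""

    def __init__(self):
        self._value = 0

    def get_state(self):
        return self._value

    def set_state(self, state):
        self._value = state

    def step(self, action, state=None):
        if state is not None:
            self.set_state(state)
        self._value += action
        return self.get_state()


env = CounterEnv()
env.step(5)
print(env.step(1, state=0))  # restoring state=0 first → 1
```

Anything that satisfies this shape can be driven by a planner: the optional `state` argument to `step` is what makes branching from arbitrary points possible.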
### Entry Point

```python
import plangym

# The make() function routes to the correct environment class
env = plangym.make(
    name="CartPole-v1",  # Environment name
    n_workers=4,         # Parallel workers (optional)
    obs_type="rgb",      # Observation type: coords, rgb, grayscale
    delay_setup=True,    # Defer initialization for serialization
)
```
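The `delay_setup` flag follows a general lazy-initialization pattern: hold only the constructor arguments until the object reaches its worker, then build the heavy resource on first use. A stdlib-only sketch of that pattern (the `LazyEnv` class is hypothetical, not plangym's implementation):

```python
import pickle


class LazyEnv:
    """Stores only constructor args; the 'heavy' inner object is built on demand."""

    def __init__(self, name, delay_setup=True):
        self.name = name
        self._inner = None
        if not delay_setup:
            self.setup()

    def setup(self):
        # Stand-in for creating an unpicklable resource (emulator, GPU context...)
        self._inner = {"name": self.name, "ready": True}

    def __getstate__(self):
        # Drop the inner object so the wrapper stays picklable
        return {"name": self.name}

    def __setstate__(self, state):
        self.name = state["name"]
        self._inner = None

    def step(self, action):
        if self._inner is None:  # lazy setup on first use
            self.setup()
        return (self.name, action)


env = LazyEnv("CartPole-v1")                # no heavy setup yet
restored = pickle.loads(pickle.dumps(env))  # safe to ship to a worker
print(restored.step(0))                     # → ('CartPole-v1', 0)
```

This is why delayed initialization matters for distributed workers: the wrapper crosses the process boundary as lightweight arguments, and each worker pays the setup cost locally.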
## License

Plangym is released under the MIT License.

## Contributing

Contributions are welcome! Please read our Contributing Guidelines before submitting a pull request.

Quick contribution checklist:

- Run `make check` to verify code style
- Run `make test` to ensure tests pass
- Add tests for new functionality
- Update documentation as needed

For bug reports and feature requests, please open an issue.