Simple Gridworld Environment for Gymnasium
Project description
SimpleGrid: Simple Grid Environment for Gymnasium
SimpleGrid is a super simple grid environment for Gymnasium (formerly OpenAI Gym). It is easy to use and customise, and it is intended to offer an environment for quickly testing and prototyping reinforcement learning algorithms.
It is also efficient, lightweight and has few dependencies (gymnasium, numpy, matplotlib).
SimpleGrid involves navigating a grid from a Start (red tile) to a Goal (green tile) state without colliding with any Wall (black tiles) by walking over the Empty (white tiles) cells. The yellow circle denotes the agent's current position.
Key Features
- Gymnasium v0.26+ Ready: supports the terminated/truncated API and proper seeding.
- Decoupled Architecture: logic, rendering, and map parsing are handled by specialized classes.
- High Performance: vectorized map parsing and coordinate logic using NumPy.
- Minimal Dependencies: only gymnasium, numpy, and matplotlib.
Installation
Install via pip:
```shell
pip install gym-simplegrid
```
Or for development (editable install):
```shell
git clone https://github.com/damat-le/gym-simplegrid.git
cd gym-simplegrid
pip install -e .
```
Getting Started
Basic Usage
```python
import gymnasium as gym
import gym_simplegrid

# Define a custom map (optional): each string is a row,
# '0' marks an empty cell and '1' marks a wall
obstacle_map = [
    "00001",
    "00100",
    "00010",
]

# Create the environment using the custom map
# Note: there are also pre-registered maps like 'SimpleGrid-4x4-v0', 'SimpleGrid-8x8-v0', etc.
env = gym.make(
    'SimpleGrid-v0',
    obstacle_map=obstacle_map,
    render_mode='human'
)

# Reset with options
# options can specify 'start_loc' and 'goal_loc' as int or (row, col)
obs, info = env.reset(
    seed=42,
    options={'start_loc': 0, 'goal_loc': (2, 4)}
)

# Action-perception loop
for _ in range(50):
    action = env.action_space.sample()
    # Gymnasium returns 5 values
    obs, reward, terminated, truncated, info = env.step(action)
    # Manual render call (allows for better performance control)
    env.render()
    if terminated or truncated:
        break

env.close()
```
Environment Description
Action Space
The action space is gymnasium.spaces.Discrete(4). An action is an integer representing a direction according to the following scheme:
- 0: UP
- 1: DOWN
- 2: LEFT
- 3: RIGHT
Observation Space
Assume an environment of size (nrow, ncol); then the observation space is gymnasium.spaces.Discrete(nrow * ncol). Hence, an observation is an integer from 0 to nrow * ncol - 1 representing the agent's current position. We can convert an observation s to a tuple (x, y) using the following formulae:
```python
x = s // ncol  # integer division
y = s % ncol   # modulo operation
```
For example: let nrow=4, ncol=5 and let s=11. Then x=11//5=2 and y=11%5=1.
Vice versa, we can convert a tuple (x, y) to an observation s using the following formula:
```python
s = x * ncol + y
```
For example: let nrow=4, ncol=5 and let x=2, y=1. Then s=2*5+1=11.
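The two conversions above can be packaged as small helper functions (the function names here are illustrative, not part of the library's API):

```python
def xy_from_obs(s: int, ncol: int) -> tuple[int, int]:
    """Convert a flat observation s into (row, col) coordinates."""
    return s // ncol, s % ncol

def obs_from_xy(x: int, y: int, ncol: int) -> int:
    """Convert (row, col) coordinates back into a flat observation."""
    return x * ncol + y

print(xy_from_obs(11, ncol=5))    # → (2, 1)
print(obs_from_xy(2, 1, ncol=5))  # → 11
```

Note that only ncol is needed for both directions; nrow merely bounds the valid range of s.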
Environment Dynamics
In the current implementation, an episode terminates only when the agent reaches the goal state; it is truncated if the maximum number of steps (when provided) is exceeded. If the agent takes an invalid action (e.g. it tries to walk over a wall or exit the grid), the agent stays in the same position and receives a negative reward.
It is possible to subclass the SimpleGridEnv class and to override the step() method to define custom dynamics (e.g. truncate the episode if the agent takes a non-valid action).
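As a sketch of that pattern, the snippet below uses a minimal self-contained stand-in for the grid logic (the class and method names are illustrative, not the real SimpleGridEnv), with a subclass whose overridden step() truncates the episode on any invalid action:

```python
# Self-contained stand-in for the grid dynamics; in practice you would
# subclass the real SimpleGridEnv from gym_simplegrid instead.
class MiniGridLogic:
    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # UP, DOWN, LEFT, RIGHT

    def __init__(self, nrow: int, ncol: int, walls: set = ()):
        self.nrow, self.ncol = nrow, ncol
        self.walls = set(walls)
        self.agent_xy = (0, 0)

    def _is_valid_xy(self, xy):
        x, y = xy
        return 0 <= x < self.nrow and 0 <= y < self.ncol and xy not in self.walls

    def step(self, action):
        dx, dy = self.MOVES[action]
        target = (self.agent_xy[0] + dx, self.agent_xy[1] + dy)
        if self._is_valid_xy(target):
            self.agent_xy = target
            return self.agent_xy, -0.1, False  # obs, reward, truncated
        return self.agent_xy, -1.0, False      # default: stay put, penalize

class StrictGrid(MiniGridLogic):
    """Custom dynamics: an invalid action ends the episode via truncation."""
    def step(self, action):
        obs, reward, _ = super().step(action)
        truncated = reward == -1.0  # invalid move → truncate
        return obs, reward, truncated
```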
Rewards
Currently, the reward map is defined in the get_reward() method of the SimpleGridEnv class.
For a given position (x,y), the default reward function is defined as follows:
```python
def get_reward(self, xy: tuple[int, int]) -> float:
    """
    Logic for reward calculation. Overload this to change behavior.
    """
    if not self._is_valid_xy(xy):
        return -1.0  # Penalty for invalid move
    if xy == self.goal_xy:
        return 1.0   # Reward for reaching the goal
    return -0.1      # Step penalty to encourage shorter paths
```
It is possible to subclass the SimpleGridEnv class and to override this method to define custom rewards.
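For instance, a subclass could replace the constant step penalty with a distance-shaped reward. The sketch below uses a minimal stand-in exposing the attributes the reward logic relies on (goal_xy, nrow, ncol, _is_valid_xy — names assumed from the README); in practice you would subclass the real SimpleGridEnv:

```python
# Stand-in base class mirroring the default reward logic shown above
class RewardLogic:
    def __init__(self, goal_xy: tuple[int, int], nrow: int, ncol: int):
        self.goal_xy = goal_xy
        self.nrow, self.ncol = nrow, ncol

    def _is_valid_xy(self, xy: tuple[int, int]) -> bool:
        x, y = xy
        return 0 <= x < self.nrow and 0 <= y < self.ncol

    def get_reward(self, xy: tuple[int, int]) -> float:
        if not self._is_valid_xy(xy):
            return -1.0
        if xy == self.goal_xy:
            return 1.0
        return -0.1

class DenseRewardEnv(RewardLogic):
    """Replace the flat step penalty with Manhattan-distance shaping."""
    def get_reward(self, xy: tuple[int, int]) -> float:
        if not self._is_valid_xy(xy):
            return -1.0
        if xy == self.goal_xy:
            return 1.0
        gx, gy = self.goal_xy
        x, y = xy
        # Penalty grows with distance to the goal, normalized by grid size
        return -(abs(gx - x) + abs(gy - y)) / (self.nrow + self.ncol)
```

Shaped rewards like this can speed up learning on larger grids, since the agent receives a gradient toward the goal instead of a uniform step cost.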
Notes on Rendering
- Passive Rendering: the environment does not render automatically inside step(). You must call env.render() explicitly. This improves training speed significantly when rendering is not needed.
- Modes:
  - human: live Matplotlib window.
  - rgb_array: returns a NumPy array of pixels.
  - ansi: returns a CSV-style string of the current state.
Citation
```bibtex
@misc{gym_simplegrid,
  author       = {Leo D'Amato},
  title        = {SimpleGrid: Simple Grid Environment for Gymnasium},
  year         = {2022},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/damat-le/gym-simplegrid}},
}
```
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file gym_simplegrid-1.1.0.tar.gz.
File metadata
- Download URL: gym_simplegrid-1.1.0.tar.gz
- Upload date:
- Size: 18.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ce7a1eefe579444b491fa18cfd8ff75c01f219984bc209aa332f771c67a73482 |
| MD5 | 2f0edbe7e88c6be550074caecbad11ea |
| BLAKE2b-256 | c57a28e828357e08984c70f194f0c106db457008f4e72fa88233f7fbfbbce6b3 |
File details
Details for the file gym_simplegrid-1.1.0-py3-none-any.whl.
File metadata
- Download URL: gym_simplegrid-1.1.0-py3-none-any.whl
- Upload date:
- Size: 15.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f1140db20f1a0c9dce554214ca3eeb53891ed13041d923371686de3c9f6a4c84 |
| MD5 | 318ab75c225dac426bf49a0a997a4ae7 |
| BLAKE2b-256 | a1fb98b72f9ff114050acc2cbbe3095781f21c9a1bd6ac253ce5da056feed8c6 |