PufferGrid
A framework for fast grid-based environments
PufferGrid is a fast GridWorld engine for Reinforcement Learning implemented in Cython.
Features
- High-performance grid-based environments
- Customizable actions, events, and observations
- Easy integration with popular RL frameworks
Installation
You can install PufferGrid using pip or from source.
Using pip
The easiest way to install PufferGrid is using pip:
pip install puffergrid
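Once installed, a quick import check confirms the compiled extension modules load correctly:

python -c "import puffergrid"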
From Source
To install PufferGrid from source, follow these steps:
1. Clone the repository:

   git clone https://github.com/daveey/puffergrid.git
   cd puffergrid

2. Build and install the package:

   python setup.py build_ext --inplace
   pip install -e .
Getting Started
The best way to understand how to create a PufferGrid environment is to look at a complete example. Check out the forage.pyx file in the examples directory for a full implementation of a foraging environment.
Below is a step-by-step walkthrough of creating a similar environment, explaining each component along the way.
Step 1: Define Game Objects
First, we'll define our game objects: Agent, Wall, and Tree.
from puffergrid.grid_object cimport GridObject

cdef struct AgentProps:
    unsigned int energy
    unsigned int orientation
ctypedef GridObject[AgentProps] Agent

cdef struct WallProps:
    unsigned int hp
ctypedef GridObject[WallProps] Wall

cdef struct TreeProps:
    char has_fruit
ctypedef GridObject[TreeProps] Tree

cdef enum ObjectType:
    AgentT = 0
    WallT = 1
    TreeT = 2
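The same struct-plus-ctypedef pattern works for any additional object type you want on the grid. As a purely illustrative example (not part of the forage environment), a hypothetical Food object with an amount property would be declared like this:

cdef struct FoodProps:
    unsigned int amount
ctypedef GridObject[FoodProps] Food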
Step 2: Define Actions
Next, we'll define the actions our agents can take: Move, Rotate, and Eat.
from puffergrid.action cimport ActionHandler, ActionArg

cdef class Move(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...

cdef class Rotate(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...

cdef class Eat(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...
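The handler bodies are elided above; see forage.pyx for the real implementations. Purely as an illustration of a handler's shape, a Rotate body might look roughly like the sketch below, where the object-lookup call and the props field are assumed names rather than the actual PufferGrid API:

cdef class Rotate(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Hypothetical sketch only: `self.env.get_object` and `props` are
        # assumed names, not the real PufferGrid internals.
        cdef Agent *agent = <Agent*>self.env.get_object(actor_object_id)
        if arg > 3:
            return False  # four orientations: 0-3
        agent.props.orientation = arg
        return True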
Step 3: Define Event Handlers
We'll create an event handler to reset trees after they've been eaten from.
from puffergrid.event cimport EventHandler, EventArg

cdef class ResetTreeHandler(EventHandler):
    cdef void handle_event(self, GridObjectId obj_id, EventArg arg):
        # Implementation details...

cdef enum Events:
    ResetTree = 0
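As with the actions, the event handler's body lives in forage.pyx. The idea is that when a scheduled ResetTree event fires, the handler puts fruit back on the tree. A rough sketch, with the same caveat that the lookup call and props field are assumptions rather than the actual API:

cdef class ResetTreeHandler(EventHandler):
    cdef void handle_event(self, GridObjectId obj_id, EventArg arg):
        # Hypothetical sketch only: `self.env.get_object` and `props` are
        # assumed names, not the real PufferGrid internals.
        cdef Tree *tree = <Tree*>self.env.get_object(obj_id)
        tree.props.has_fruit = 1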
Step 4: Define Observation Encoder
Create an observation encoder to define what agents can observe in the environment.
from puffergrid.observation_encoder cimport ObservationEncoder

cdef class ObsEncoder(ObservationEncoder):
    cdef encode(self, GridObjectBase *obj, int[:] obs):
        # Implementation details...

    cdef vector[string] feature_names(self):
        return [
            "agent", "agent:energy", "agent:orientation",
            "wall", "tree", "tree:has_fruit"]
Step 5: Define The Environment
Finally, we'll put it all together in our Forage environment class.
from puffergrid.grid_env cimport GridEnv

cdef class Forage(GridEnv):
    def __init__(self, int map_width=100, int map_height=100,
                 int num_agents=20, int num_walls=10, int num_trees=10):
        GridEnv.__init__(
            self,
            map_width,
            map_height,
            0,  # max_timestep
            [ObjectType.AgentT, ObjectType.WallT, ObjectType.TreeT],
            11, 11,  # observation shape
            ObsEncoder(),
            [Move(), Rotate(), Eat()],
            [ResetTreeHandler()]
        )

        # Initialize agents, walls, and trees
        # Implementation details...
Step 6: Using the Environment
Now that we've defined our environment, we can use it in a reinforcement learning loop:
from puffergrid.wrappers.grid_env_wrapper import PufferGridEnv

# Create the Forage environment
c_env = Forage(map_width=100, map_height=100, num_agents=20, num_walls=10, num_trees=10)

# Wrap the environment with PufferGridEnv
env = PufferGridEnv(c_env, num_agents=20, max_timesteps=1000)

# Reset the environment
obs, _ = env.reset()

# Run a simple loop
for _ in range(1000):
    actions = env.action_space.sample()  # Random actions
    obs, rewards, terminals, truncations, infos = env.step(actions)
    if terminals.any() or truncations.any():
        break

# Print final stats
print(env.get_episode_stats())
This example demonstrates the core components of creating a PufferGrid environment: defining objects, actions, events, observations, and putting them together in an environment class.
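If you want to run several episodes, the same loop can be wrapped in a small helper. The sketch below uses only the wrapper methods shown above (reset, step, action_space.sample, get_episode_stats) and simply repeats the random-action rollout:

def run_random_episodes(env, num_episodes=5, max_steps=1000):
    # Roll out a few episodes with random actions and collect per-episode stats.
    all_stats = []
    for _ in range(num_episodes):
        obs, _ = env.reset()
        for _ in range(max_steps):
            actions = env.action_space.sample()
            obs, rewards, terminals, truncations, infos = env.step(actions)
            if terminals.any() or truncations.any():
                break
        all_stats.append(env.get_episode_stats())
    return all_stats

print(run_random_episodes(env))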
Performance Testing
To run performance tests on your PufferGrid environment, use the test_perf.py script:
python test_perf.py --env examples.forage.Forage --num_agents 20 --duration 20
You can also run the script with profiling enabled:
python test_perf.py --env examples.forage.Forage --num_agents 20 --duration 20 --profile
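If you prefer a rough manual measurement, a simple steps-per-second check with the standard library works too. This sketch assumes the Forage example has been built and is importable as examples.forage, and reuses only the wrapper API shown in the Getting Started section:

import time
from examples.forage import Forage
from puffergrid.wrappers.grid_env_wrapper import PufferGridEnv

# Build and wrap the environment exactly as in the Getting Started example.
c_env = Forage(map_width=100, map_height=100, num_agents=20, num_walls=10, num_trees=10)
env = PufferGridEnv(c_env, num_agents=20, max_timesteps=1000)
env.reset()

num_steps = 10000
start = time.perf_counter()
for _ in range(num_steps):
    actions = env.action_space.sample()
    obs, rewards, terminals, truncations, infos = env.step(actions)
    if terminals.any() or truncations.any():
        env.reset()
elapsed = time.perf_counter() - start
print("%.0f env steps/sec" % (num_steps / elapsed))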
Contributing
Contributions to PufferGrid are welcome! Please feel free to submit pull requests, create issues, or suggest improvements.