A library for planning and reinforcement learning research in partially observable, multi-agent environments.
POSGGym
POSGGym is a Python library for planning and reinforcement learning research in partially observable, multi-agent environments. It provides a collection of discrete and continuous environments along with reference agents to allow for reproducible evaluations. The API aims to mimic that of Gymnasium and PettingZoo with the addition of a model API that can be used for planning.
The documentation for the project is available online at posggym.readthedocs.io/. For a guide to building the documentation locally see docs/README.md.
Some baseline implementations of planning and reinforcement learning algorithms for POSGGym are available in the POSGGym-Baselines library. Compatibility with other popular reinforcement learning libraries is possible using the PettingZoo wrapper.
Installation
POSGGym supports and is tested on Python>=3.8. We recommend using a virtual environment to install POSGGym (e.g. conda, venv).
Using pip
The latest release version of POSGGym can be installed using pip by running:
pip install posggym
This will install the base dependencies for running all the environments and download the agent models (so it may take a few minutes). To minimise the number of unused dependencies, the default install does not include the dependencies needed to run many posggym agents (specifically PyTorch).
You can install the dependencies for POSGGym agents using pip install posggym[agents], or install the dependencies for all environments and agents using pip install posggym[all].
Installing from source
To install POSGGym from source, first clone the repository then run:
cd posggym
pip install -e .
This will install the base dependencies and download the agent models (so it may take a few minutes). You can optionally install extras as described above. E.g. to install all dependencies for all environments and agents use:
pip install -e .[all]
To run tests, install the test dependencies and then run the tests:
pip install -e .[testing]
pytest
Alternatively, you can run one of the examples from the examples directory:
python examples/run_random_agents.py --env_id Driving-v1 --num_episodes 10 --render_mode human
Environments
POSGGym includes the following families of environments (for a full list of environments and their descriptions see the documentation; a sketch for enumerating the registered environments follows this list).
- Classic - These are classic POSG problems from the literature.
- Grid-World - These environments are all set in a 2D grid world.
- Continuous - 2D environments with continuous state, actions, and observations.
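If posggym mirrors Gymnasium's registration API (an assumption; the exact attribute may differ, so check the documentation), the registered environment IDs can be enumerated programmatically:

import posggym

# Assumption: posggym exposes a Gymnasium-style registry mapping env IDs to
# their specs (it may instead live at posggym.envs.registry); check the docs.
for env_id in sorted(posggym.registry):
    print(env_id)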
Environment API
POSGGym models each environment as a Python Env class. Creating environment instances and interacting with them is very simple, and flows almost identically to the Gymnasium user flow. Here's an example using the PredatorPrey-v0 environment:
import posggym

env = posggym.make("PredatorPrey-v0")
observations, infos = env.reset(seed=42)

for t in range(100):
    env.render()
    # Sample a random action for each active agent.
    actions = {i: env.action_spaces[i].sample() for i in env.agents}
    observations, rewards, terminations, truncations, all_done, infos = env.step(actions)
    if all_done:
        observations, infos = env.reset()

env.close()
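One multi-agent difference from the Gymnasium flow worth noting: the rewards, terminations, and truncations returned by env.step() are per-agent dictionaries keyed by agent ID, while all_done is a single episode-level flag. A minimal sketch of inspecting the per-agent flags, using only the attributes from the loop above:

import posggym

env = posggym.make("PredatorPrey-v0")
observations, infos = env.reset(seed=42)
actions = {i: env.action_spaces[i].sample() for i in env.agents}
observations, rewards, terminations, truncations, all_done, infos = env.step(actions)

# terminations/truncations are dicts keyed by agent ID, so each agent's
# status can be checked independently of the episode-level all_done flag.
finished = [i for i in terminations if terminations[i] or truncations[i]]
print(f"finished agents: {finished}, episode over: {all_done}")
env.close()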
Model API
Every environment provides access to a model of the environment in the form of a POSGModel class. Each model implements a generative model, which can be used for planning, along with functions for sampling initial states. Some environments also implement a full POSG model, including the transition, joint observation, and joint reward functions.
The following is an example of accessing and using the environment model:
import posggym

env = posggym.make("PredatorPrey-v0")
model = env.model
model.seed(seed=42)

state = model.sample_initial_state()
observations = model.sample_initial_obs(state)

for t in range(100):
    # The model is stateless: the current state is passed in explicitly.
    actions = {i: model.action_spaces[i].sample() for i in model.get_agents(state)}
    state, observations, rewards, terminations, truncations, all_done, infos = model.step(state, actions)
    if all_done:
        state = model.sample_initial_state()
        observations = model.sample_initial_obs(state)
The base model API is very similar to the environment API. The key difference is that all methods are stateless, so they can be repeatedly sampled for planning. Indeed, the Env class implementations for the built-in environments are wrappers over an underlying POSGModel class, managing the state and adding support for rendering.
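Statelessness is what makes the model useful for planning: many independent rollouts can be sampled from the same root state. Below is a minimal sketch of a random-rollout return estimator built only on the model methods shown above (the rollout_returns helper is illustrative, and it assumes model.step() returns a new state rather than mutating the one passed in, consistent with the stateless model description):

import posggym

env = posggym.make("PredatorPrey-v0")
model = env.model
model.seed(seed=42)


def rollout_returns(model, state, horizon=20):
    # Accumulate per-agent rewards along one random rollout from `state`.
    returns = {i: 0.0 for i in model.get_agents(state)}
    for _ in range(horizon):
        agents = model.get_agents(state)
        actions = {i: model.action_spaces[i].sample() for i in agents}
        state, _, rewards, _, _, all_done, _ = model.step(state, actions)
        for i, r in rewards.items():
            returns[i] = returns.get(i, 0.0) + r
        if all_done:
            break
    return returns


# Because the model is stateless, many rollouts can start from the same
# root state, e.g. to estimate its value for planning.
root = model.sample_initial_state()
estimates = [rollout_returns(model, root) for _ in range(10)]
print(estimates[0])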
Note that, unlike for the Env class, for convenience the output of the model.step() method is a dataclass instance, so its components can be accessed as attributes. For example:
timestep = model.step(state, actions)
observations = timestep.observations
infos = timestep.infos
Both the Env and POSGModel classes support a number of additional methods; refer to the documentation for more details.
Agents API
The Agents API provides an easy way to load the reference policies that come with POSGGym. Each policy is a Policy class which, at its simplest, accepts an observation and returns the next action. The basic Agents API is shown below:
import posggym
import posggym.agents as pga

env = posggym.make("PursuitEvasion-v1", grid="16x16")
policies = {
    '0': pga.make("PursuitEvasion-v1/grid=16x16/RL1_i0-v0", env.model, '0'),
    '1': pga.make("PursuitEvasion-v1/grid=16x16/ShortestPath-v0", env.model, '1'),
}

obs, infos = env.reset(seed=42)
for i, policy in policies.items():
    policy.reset(seed=7)

for t in range(100):
    # Each policy maps its agent's latest observation to an action.
    actions = {i: policies[i].step(obs[i]) for i in env.agents}
    obs, rewards, terminations, truncations, all_done, infos = env.step(actions)
    if all_done:
        obs, infos = env.reset()
        for i, policy in policies.items():
            policy.reset()

env.close()
for policy in policies.values():
    policy.close()
For a full explanation of the Agents API, please see the POSGGym Agents Getting Started documentation. A full list of implemented agents is also available in the documentation.
Compatibility with PettingZoo
Any POSGGym environment can be converted into a PettingZoo ParallelEnv environment using the posggym.wrappers.petting_zoo.PettingZoo wrapper. This allows for easy integration with the ecosystem of libraries that support PettingZoo.
import posggym
from posggym.wrappers.petting_zoo import PettingZoo
env = posggym.make("PredatorPrey-v0")
env = PettingZoo(env)
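As a quick check that the conversion works, the wrapped environment can then be driven through PettingZoo's standard ParallelEnv loop (a minimal sketch, assuming the wrapper follows the usual ParallelEnv semantics in which env.agents empties once the episode ends):

import posggym
from posggym.wrappers.petting_zoo import PettingZoo

env = PettingZoo(posggym.make("PredatorPrey-v0"))

observations, infos = env.reset(seed=42)
for _ in range(100):
    if not env.agents:  # ParallelEnv semantics: no agents left means the episode is over
        break
    actions = {i: env.action_space(i).sample() for i in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()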
Citation
You can cite POSGGym as:
@misc{schwartzPOSGGym2023,
  title = {POSGGym},
  urldate = {2023-08-08},
  author = {Schwartz, Jonathon and Newbury, Rhys and Kurniawati, Hanna},
  year = {2023},
}