# marlenv - A unified interface for multi-agent reinforcement learning

A strongly typed Multi-Agent Reinforcement Learning framework.

The objective of `marlenv` is to provide a common (typed) interface for many different reinforcement learning environments. As such, `marlenv` provides high-level abstractions of RL concepts such as `Observation`s and `Transition`s, which are commonly represented as mere (confusing) lists or tuples.
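To make the contrast concrete, here is a minimal sketch of the idea. The toy `Transition` class below is illustrative only and is not `marlenv`'s actual class definition:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Transition:
    """Toy stand-in for a typed transition container (not marlenv's real class)."""

    obs: np.ndarray
    actions: np.ndarray
    reward: float
    done: bool


t = Transition(obs=np.zeros((3, 4)), actions=np.array([0, 1, 2]), reward=1.0, done=False)
# Fields are named and statically checkable instead of being t[0], t[1], ...
print(t.reward)
```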
## Using marlenv with existing libraries

`marlenv` unifies multiple popular libraries under a single interface. Namely, `marlenv` supports `smac`, `gymnasium` and `pettingzoo`.
```python
import marlenv

# You can instantiate gymnasium environments directly via their registry ID
gym_env = marlenv.make("CartPole-v1", seed=25)

# You can seamlessly instantiate a SMAC environment and directly pass your required arguments
from marlenv.adapters import SMAC
smac_env = SMAC("3m", debug=True, difficulty="9")

# pettingzoo is also supported
from pettingzoo.sisl import pursuit_v4
from marlenv.adapters import PettingZoo
pz_env = PettingZoo(pursuit_v4.parallel_env())
```
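Because every adapter exposes the same interface, the same interaction loop works for any of the environments above. The sketch below assumes a `reset`/`step` API along the lines of the `RLEnv` methods shown later in this README; the exact return values of `step` and the availability of `action_space.sample()` are assumptions for illustration:

```python
# A sketch of a unified rollout loop (method signatures are assumptions).
obs = gym_env.reset()
for _ in range(10):
    actions = gym_env.action_space.sample()  # assumed random-action helper
    obs, reward, done, info = gym_env.step(actions)  # assumed return signature
    if done:
        obs = gym_env.reset()
```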
## Designing custom environments

You can create your own custom environment by inheriting from the `RLEnv` class. The example below illustrates a gridworld with a discrete action space. Note that other methods such as `step` and `render` must also be implemented.
```python
import numpy as np
from marlenv import RLEnv, DiscreteActionSpace, Observation

N_AGENTS = 3
N_ACTIONS = 5


class CustomEnv(RLEnv[DiscreteActionSpace]):
    def __init__(self, width: int, height: int):
        super().__init__(
            action_space=DiscreteActionSpace(N_AGENTS, N_ACTIONS),
            observation_shape=(height, width),
            state_shape=(1,),
        )
        self.time = 0

    def reset(self) -> Observation:
        self.time = 0
        ...  # Construct the initial observation `obs` here
        return obs

    def get_state(self):
        return np.array([self.time])
```
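Once implemented, the custom environment can be used like any other `RLEnv`. For instance (a sketch, assuming the elided body of `reset` has been filled in):

```python
env = CustomEnv(width=10, height=5)
obs = env.reset()        # an Observation; observation_shape is (5, 10)
print(env.get_state())   # -> [0], since no step has been taken yet
```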
## Useful wrappers

`marlenv` comes with multiple common environment wrappers; check the documentation for a complete list. The preferred way of using the wrappers is through a `marlenv.Builder`. The example below shows how to add a time limit (in number of steps) and an agent id to the observations of a SMAC environment.
```python
from marlenv import Builder
from marlenv.adapters import SMAC

env = Builder(SMAC("3m")).agent_id().time_limit(20).build()
print(env.extra_feature_shape)  # -> (3,) because there are 3 agents
```
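The same builder pattern applies to custom environments as well. The sketch below reuses the (hypothetically completed) `CustomEnv` from the previous section:

```python
from marlenv import Builder

# Wrap the custom gridworld with a 50-step time limit (a sketch; assumes
# CustomEnv from the previous section is fully implemented).
env = Builder(CustomEnv(width=10, height=5)).time_limit(50).build()
```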
## Related projects

- [MARL](https://github.com/yamoling/marl): a collection of multi-agent reinforcement learning algorithms based on `marlenv`
- [Laser Learning Environment](https://pypi.org/project/laser-learning-environment/): a multi-agent gridworld that leverages `marlenv`'s capabilities