
A strongly typed Multi-Agent Reinforcement Learning framework

Project description

marlenv - A unified interface for multi-agent reinforcement learning

The objective of marlenv is to provide a common (typed) interface for many different reinforcement learning environments.

As such, marlenv provides high-level abstractions of RL concepts, such as Observations or Transitions, that are otherwise commonly represented as bare (and confusing) lists or tuples.
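
In practice this means you manipulate named, typed objects rather than positional tuples. The snippet below is a hedged sketch of what that looks like: the attribute names mentioned in the comments (data, extras, available_actions) are assumptions made for illustration and may not match the actual Observation fields, so refer to the package's type hints for the authoritative names.

import marlenv

env = marlenv.make("CartPole-v1", seed=25)
obs = env.reset()  # reset() returns a typed Observation rather than a bare tuple
# Hypothetical attribute names, shown for illustration only:
# obs.data               -> the per-agent observation array
# obs.extras             -> extra features (e.g. agent IDs when enabled)
# obs.available_actions  -> mask of the currently legal actions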

Using marlenv with existing libraries

marlenv unifies multiple popular libraries under a single interface. Namely, marlenv supports SMAC, Gymnasium and PettingZoo.

import marlenv

# You can instantiate gymnasium environments directly via their registry ID
gym_env = marlenv.make("CartPole-v1", seed=25)

# You can seamlessly instantiate a SMAC environment and pass its arguments directly
from marlenv.adapters import SMAC
smac_env = SMAC("3m", debug=True, difficulty="9")

# pettingzoo is also supported
from pettingzoo.sisl import pursuit_v4
from marlenv.adapters import PettingZoo
pz_env = PettingZoo(pursuit_v4.parallel_env())
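
Because all three adapters expose the same interface, downstream code does not need to know which library backs the environment. The loop below is a minimal sketch under assumptions: that step() follows the common (observation, reward, done, truncated, info) convention and that the environment exposes an action_space with a sample() method. Treat it as illustrative rather than as the library's guaranteed API.

import numpy as np

obs = gym_env.reset()
done = truncated = False
total_reward = 0.0
while not (done or truncated):
    actions = gym_env.action_space.sample()  # assumed helper: one action per agent
    obs, reward, done, truncated, info = gym_env.step(actions)
    total_reward += float(np.sum(reward))
print(total_reward)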

Designing custom environments

You can create your own custom environment by inheriting from the RLEnv class. The example below illustrates a gridworld with a discrete action space. Note that other methods, such as step or render, must also be implemented; a hedged sketch of a possible step follows the class.

import numpy as np
from marlenv import RLEnv, DiscreteActionSpace, Observation

N_AGENTS = 3
N_ACTIONS = 5

class CustomEnv(RLEnv[DiscreteActionSpace]):
    def __init__(self, width: int, height: int):
        super().__init__(
            action_space=DiscreteActionSpace(N_AGENTS, N_ACTIONS),
            observation_shape=(height, width),
            state_shape=(1,),
        )
        self.time = 0

    def reset(self) -> Observation:
        self.time = 0
        ...  # construct the initial Observation (elided here)
        return obs

    def get_state(self):
        return np.array([self.time])
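
For completeness, a step implementation for this gridworld could look like the sketch below. The return convention (observation, per-agent rewards, done, truncated, info) and the helper used to build the Observation are assumptions made for illustration; check the RLEnv base class for the exact signature expected by your marlenv version.

    def step(self, actions):
        # Hedged sketch: advance the clock, hand out a zero reward per agent,
        # and end the episode after a fixed horizon. Replace with real dynamics.
        self.time += 1
        rewards = np.zeros(N_AGENTS, dtype=np.float32)
        done = self.time >= 50
        obs = self.compute_observation()  # hypothetical helper returning an Observation
        return obs, rewards, done, False, {}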

Useful wrappers

marlenv comes with multiple common environment wrappers; check the documentation for a complete list. The preferred way to apply them is through a marlenv.Builder. The example below adds a time limit (in number of steps) and an agent ID to the observations of a SMAC environment.

from marlenv import Builder
from marlenv.adapters import SMAC

env = Builder(SMAC("3m")).agent_id().time_limit(20).build()
print(env.extra_feature_shape) # -> (3, ) because there are 3 agents
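
Chaining the builder methods keeps each wrapper independent: agent_id() appends an identifier (typically one-hot) for each agent to the observation extras, which is why the extra feature shape is (3,) for three agents, while time_limit(20) ends every episode after at most 20 steps.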


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multi_agent_rlenv-1.1.0.tar.gz (25.9 kB)

Uploaded Source

Built Distribution

multi_agent_rlenv-1.1.0-py3-none-any.whl (30.4 kB)

Uploaded Python 3

File details

Details for the file multi_agent_rlenv-1.1.0.tar.gz.

File metadata

  • Download URL: multi_agent_rlenv-1.1.0.tar.gz
  • Upload date:
  • Size: 25.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for multi_agent_rlenv-1.1.0.tar.gz

  • SHA256: 760b87ff732dc8642dfd2d34e87c3bada00bbc3645b44447e0effbc74a87d9fc
  • MD5: 592b59923f8e8706e1e5c6245e307a93
  • BLAKE2b-256: 3e303372ae553746af728e63899da6acdfab6811ef285fe9235296e3c8502481


File details

Details for the file multi_agent_rlenv-1.1.0-py3-none-any.whl.

File hashes

Hashes for multi_agent_rlenv-1.1.0-py3-none-any.whl

  • SHA256: e3a3666cedb6c26a28cc76ef441b53db0279f65eb1312161a3304a67669b0975
  • MD5: 7958fd49fcefddbbfae3fd885587b6da
  • BLAKE2b-256: f33d5c7af806dbd8be34db34c76ddc639e90ae740b73f652ffc03a69e1357457

