
A strongly typed Multi-Agent Reinforcement Learning framework


marlenv - A unified interface for multi-agent reinforcement learning

The objective of marlenv is to provide a common (typed) interface for many different reinforcement learning environments.

As such, marlenv provides high-level abstractions of RL concepts such as Observations or Transitions, which are commonly represented elsewhere as mere (confusing) lists or tuples.
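To illustrate the idea (not marlenv's actual class, whose fields and methods differ), a typed observation container replaces an anonymous tuple with named, shape-documented attributes:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical sketch of a typed observation container. The field names
# below are illustrative assumptions, not marlenv's real API.
@dataclass
class Observation:
    data: np.ndarray               # per-agent observations, shape (n_agents, *obs_shape)
    extras: np.ndarray             # extra features appended per agent
    available_actions: np.ndarray  # boolean mask of legal actions per agent

obs = Observation(
    data=np.zeros((3, 5, 5)),
    extras=np.zeros((3, 0)),
    available_actions=np.ones((3, 5), dtype=bool),
)
print(obs.data.shape)  # (3, 5, 5)
```

Compared to a bare tuple, attribute access (`obs.data`) makes the code self-documenting and lets a type checker catch mistakes.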

Using marlenv with existing libraries

marlenv unifies multiple popular libraries under a single interface. Namely, marlenv supports smac, gymnasium and pettingzoo.

import marlenv

# You can instantiate gymnasium environments directly via their registry ID
gym_env = marlenv.make("CartPole-v1", seed=25)

# You can seamlessly instantiate a SMAC environment and directly pass your required arguments
from marlenv.adapters import SMAC
smac_env = SMAC("3m", debug=True, difficulty="9")

# pettingzoo is also supported
from pettingzoo.sisl import pursuit_v4
from marlenv.adapters import PettingZoo
pz_env = PettingZoo(pursuit_v4.parallel_env())

Designing custom environments

You can create your own custom environment by inheriting from the RLEnv class. The below example illustrates a gridworld with a discrete action space. Note that other methods such as step or render must also be implemented.

import numpy as np
from marlenv import RLEnv, DiscreteActionSpace, Observation

N_AGENTS = 3
N_ACTIONS = 5

class CustomEnv(RLEnv[DiscreteActionSpace]):
    def __init__(self, width: int, height: int):
        super().__init__(
            action_space=DiscreteActionSpace(N_AGENTS, N_ACTIONS),
            observation_shape=(height, width),
            state_shape=(1,),
        )
        self.time = 0

    def reset(self) -> Observation:
        self.time = 0
        ...
        return obs  # build and return the initial Observation here

    def get_state(self):
        return np.array([self.time])
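For completeness, here is one way the remaining methods might look. The sketch below drops the RLEnv base class so it runs standalone; marlenv's real step signature and return types may differ.

```python
import numpy as np

N_AGENTS = 3
N_ACTIONS = 5

class StandaloneGridWorld:
    """Self-contained sketch; marlenv's RLEnv defines the real interface."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.time = 0

    def reset(self) -> np.ndarray:
        self.time = 0
        # One (height, width) observation per agent.
        return np.zeros((N_AGENTS, self.height, self.width))

    def step(self, actions: list[int]):
        assert len(actions) == N_AGENTS
        self.time += 1
        obs = np.zeros((N_AGENTS, self.height, self.width))
        reward = 0.0
        done = self.time >= 50  # arbitrary episode length for this sketch
        return obs, reward, done, {}

    def get_state(self) -> np.ndarray:
        return np.array([self.time])

env = StandaloneGridWorld(4, 4)
obs = env.reset()
obs, reward, done, info = env.step([0, 1, 2])
print(env.get_state())  # [1]
```

The key point is that every agent acts simultaneously: step takes one action per agent and returns a joint observation, which is what the DiscreteActionSpace(N_AGENTS, N_ACTIONS) declaration encodes.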

Useful wrappers

marlenv comes with multiple common environment wrappers; check the documentation for a complete list. The preferred way to apply wrappers is through a marlenv.Builder. The example below adds a time limit (in number of steps) and appends the agent ID to the observations of a SMAC environment.

from marlenv import Builder
from marlenv.adapters import SMAC

env = Builder(SMAC("3m")).agent_id().time_limit(20).build()
print(env.extra_feature_shape) # -> (3, ) because there are 3 agents
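A time-limit wrapper follows a standard pattern: count steps and truncate the episode once the limit is reached. A minimal standalone version of that pattern (illustrative, not marlenv's implementation) looks like this:

```python
class TimeLimit:
    """Truncate episodes after max_steps steps (illustrative sketch)."""

    def __init__(self, env, max_steps: int):
        self.env = env
        self.max_steps = max_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return self.env.reset()

    def step(self, actions):
        obs, reward, done, info = self.env.step(actions)
        self._t += 1
        if self._t >= self.max_steps:
            done = True  # episode truncated by the wrapper
        return obs, reward, done, info

# A trivial environment that never terminates on its own.
class DummyEnv:
    def reset(self):
        return 0

    def step(self, actions):
        return 0, 0.0, False, {}

env = TimeLimit(DummyEnv(), max_steps=20)
env.reset()
done, steps = False, 0
while not done:
    _, _, done, _ = env.step([0])
    steps += 1
print(steps)  # 20
```

Because the wrapper exposes the same reset/step interface as the environment it wraps, wrappers compose freely, which is exactly what the Builder chain above exploits.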


