
A strongly typed Multi-Agent Reinforcement Learning framework

Project description

marlenv - A unified interface for multi-agent reinforcement learning

The objective of marlenv is to provide a common (typed) interface for many different reinforcement learning environments.

As such, marlenv provides high-level abstractions of RL concepts, such as Observations or Transitions, which are otherwise commonly represented as mere (confusing) lists or tuples.

Using marlenv with existing libraries

marlenv unifies multiple popular libraries under a single interface. Namely, marlenv supports SMAC, Gymnasium and PettingZoo.

import marlenv

# You can instantiate gymnasium environments directly via their registry ID
gym_env = marlenv.make("CartPole-v1", seed=25)

# You can seamlessly instantiate a SMAC environment and directly pass the arguments it requires
from marlenv.adapters import SMAC
smac_env = SMAC("3m", debug=True, difficulty="9")

# pettingzoo is also supported
from pettingzoo.sisl import pursuit_v4
from marlenv.adapters import PettingZoo
pz_env = PettingZoo(pursuit_v4.parallel_env())
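
Because all three adapters expose the same RLEnv interface, downstream code does not need to know which backend it is running on. The snippet below is a minimal sketch of that idea; it assumes, consistently with the custom-environment example further down, that each adapter is an RLEnv whose reset() returns an Observation.

from marlenv import RLEnv, Observation

# Sketch: the three environments created above can be handled uniformly.
for env in (gym_env, smac_env, pz_env):
    assert isinstance(env, RLEnv)        # assumption: every adapter subclasses RLEnv
    obs = env.reset()                    # assumption: reset() -> Observation
    assert isinstance(obs, Observation)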

Designing custom environments

You can create your own custom environment by inheriting from the RLEnv class. The example below illustrates a grid world with a discrete action space. Note that other methods, such as step or render, must also be implemented.

import numpy as np
from marlenv import RLEnv, DiscreteActionSpace, Observation

N_AGENTS = 3
N_ACTIONS = 5

class CustomEnv(RLEnv[DiscreteActionSpace]):
    def __init__(self, width: int, height: int):
        super().__init__(
            action_space=DiscreteActionSpace(N_AGENTS, N_ACTIONS),
            observation_shape=(height, width),
            state_shape=(1,),
        )
        self.time = 0

    def reset(self) -> Observation:
        self.time = 0
        ...  # build the initial Observation for each agent (omitted here)
        return obs

    def get_state(self):
        return np.array([self.time])
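
Once the elided parts of reset (and the other required methods such as step and render) are implemented, the environment can be used like any other marlenv environment. A minimal usage sketch:

env = CustomEnv(width=10, height=10)
obs = env.reset()          # the Observation built in reset() (body omitted above)
state = env.get_state()    # np.array([0]) right after a reset, since self.time == 0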

Useful wrappers

marlenv comes with multiple common environment wrappers; check the documentation for a complete list. The preferred way of using the wrappers is through a marlenv.Builder. The example below shows how to add a time limit (in number of steps) and an agent id to the observations of a SMAC environment.

from marlenv import Builder
from marlenv.adapters import SMAC

env = Builder(SMAC("3m")).agent_id().time_limit(20).build()
print(env.extra_feature_shape)  # -> (3,) because there are 3 agents
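
The Builder is not limited to the adapters: since the CustomEnv defined above is also an RLEnv, the same wrappers apply to it. A short sketch (the time limit of 50 steps is arbitrary):

from marlenv import Builder

env = Builder(CustomEnv(width=10, height=10)).agent_id().time_limit(50).build()
print(env.extra_feature_shape)  # expected (3,) here as well, since N_AGENTS == 3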



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multi_agent_rlenv-1.2.3.tar.gz (26.0 kB)

Uploaded Source

Built Distribution

multi_agent_rlenv-1.2.3-py3-none-any.whl (30.6 kB)

Uploaded Python 3

File details

Details for the file multi_agent_rlenv-1.2.3.tar.gz.

File metadata

  • Download URL: multi_agent_rlenv-1.2.3.tar.gz
  • Upload date:
  • Size: 26.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for multi_agent_rlenv-1.2.3.tar.gz

  • SHA256: 5d12344a9b5974f41504d173479c1ff31007adea460721323a4942c5ce1ee9c5
  • MD5: 789b316420e7b91e1e6d00a7a7229864
  • BLAKE2b-256: aa9c3ede06d8e89c15aa4c93dfaaa90f971af19a3961c13be45077823ecfae73


Provenance

The following attestation bundles were made for multi_agent_rlenv-1.2.3.tar.gz:

Publisher: GitHub
  • Repository: yamoling/multi-agent-rlenv
  • Workflow: ci.yaml
Attestations:
  • Statement type: https://in-toto.io/Statement/v1
    • Predicate type: https://docs.pypi.org/attestations/publish/v1
    • Subject name: multi_agent_rlenv-1.2.3.tar.gz
    • Subject digest: 5d12344a9b5974f41504d173479c1ff31007adea460721323a4942c5ce1ee9c5
    • Transparency log index: 147316251
    • Transparency log integration time:

File details

Details for the file multi_agent_rlenv-1.2.3-py3-none-any.whl.

File metadata

File hashes

Hashes for multi_agent_rlenv-1.2.3-py3-none-any.whl

  • SHA256: 89d68b0500158427e96cecb10509cec3a8eca0e1a967cea65888705e565b6934
  • MD5: cc9091ded6e525e489a042fa83d78133
  • BLAKE2b-256: f483fbcfbcb81bfeaf2bd7a06739bea5f7b81982ee04aac98863a4cf06fd3d90


Provenance

The following attestation bundles were made for multi_agent_rlenv-1.2.3-py3-none-any.whl:

Publisher: GitHub
  • Repository: yamoling/multi-agent-rlenv
  • Workflow: ci.yaml
Attestations:
  • Statement type: https://in-toto.io/Statement/v1
    • Predicate type: https://docs.pypi.org/attestations/publish/v1
    • Subject name: multi_agent_rlenv-1.2.3-py3-none-any.whl
    • Subject digest: 89d68b0500158427e96cecb10509cec3a8eca0e1a967cea65888705e565b6934
    • Transparency log index: 147316254
    • Transparency log integration time:
