
A general RL framework in a swarm environment

Project description

GenRL: Building Flexible, Decentralized Multi-Agent RL Environments

🌐 Visit Website    🧠 RL Swarm    ✖️ X    🤗 Hugging Face    💬 Discord

🔬 Research    📰 Latest News    💼 Work for Gensyn    📊 Dashboard

GenRL is a framework that provides native support for horizontally scalable, multi-agent, multi-stage RL with decentralized coordination and communication.

Customizable Components:

  • DataManager: Specifies and manages the particular data your RL environment will use. This could be a text dataset, an image dataset, a chessboard, or something else entirely.
  • RewardManager: This is where you implement your custom reward functions, directly shaping the RL objective for your agents.
  • Trainer: Performs two functions:
    • Train: Manages the core learning process; this is where algorithmic policy updates happen, whether you're working with policy gradient optimization, value-function approximation, or another RL paradigm.
    • Generation: Handles the generation of rollouts and agent interactions within the environment.
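As an illustration of the customizable-component idea, the sketch below shows a toy reward manager with a user-supplied reward function. All names here (the class, `evaluate`, `exact_match_reward`) are hypothetical stand-ins, not the GenRL API.

```python
# Illustrative sketch only: class and method names are hypothetical
# stand-ins for GenRL's RewardManager component.

class ExactMatchRewardManager:
    """Toy reward manager: scores each rollout with a user-defined function."""

    def __init__(self, reward_fn):
        self.reward_fn = reward_fn  # the custom reward shapes the RL objective

    def evaluate(self, rollouts):
        # One scalar reward per rollout.
        return [self.reward_fn(r) for r in rollouts]


# Example custom reward: 1.0 if the rollout text contains the target answer.
def exact_match_reward(rollout, target="42"):
    return 1.0 if target in rollout else 0.0


manager = ExactMatchRewardManager(exact_match_reward)
rewards = manager.evaluate(["the answer is 42", "no idea"])
```

The same pattern applies to the other components: subclass, override the hook that defines your environment's behavior, and let the framework drive the loop.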

Core Components

  • GameManager: Seamlessly coordinates the data flow between the core components you define and the other agents in the multi-agent swarm.
  • CommunicationManager: Handles communication between the agents in the swarm. Current backends include:
    • HiveMind: A decentralized communication protocol that allows agents to communicate with each other.
    • Torch Distributed: PyTorch's distributed communication backend, which allows colocated agents to train together.
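Conceptually, the communication layer lets every agent see every other agent's rollouts, like a distributed all-gather. The single-process stub below is purely illustrative and is not the real Hivemind or Torch Distributed backend.

```python
# Toy, single-process stand-in for a swarm communication backend.
# The real backends (Hivemind, Torch Distributed) exchange state across
# machines; this stub only illustrates the all-gather pattern.

class LocalCommunicationManager:
    def __init__(self, num_agents):
        self.num_agents = num_agents
        self._inbox = {}

    def all_gather(self, agent_id, payload):
        # Each agent publishes its payload; once every agent has posted,
        # all of them can see the full, ordered set.
        self._inbox[agent_id] = payload
        if len(self._inbox) == self.num_agents:
            return [self._inbox[i] for i in range(self.num_agents)]
        return None  # still waiting on peers


comm = LocalCommunicationManager(num_agents=2)
comm.all_gather(0, "rollout from agent 0")
gathered = comm.all_gather(1, "rollout from agent 1")
```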

Optional Components

  • Coordination: Handles coordination and orchestration between agents in a decentralized swarm. This is implemented using smart contracts on the blockchain and is only required when running in a decentralized swarm.

Framework Defined Progression

We track the game's progression on a per-round basis. Each round, the data manager initializes the round data, which kicks off the game's stages. In each stage, rollouts are generated, appended to the game state, and communicated to the swarm. After the agent has progressed through the game's predefined stages, rewards are evaluated and policies are updated. The user has full control over the update, which occurs in the Trainer.train method, and so can update the policy on a per-stage or per-round basis.

(Diagram: orchestrated data flow through the framework.)
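The per-round progression described above can be sketched as a simple loop. Everything here is an illustrative stub under assumed interfaces, not the actual GenRL classes.

```python
# Hedged sketch of one round of the framework-defined progression.
# All class and method names are illustrative stubs, not the GenRL API.

def run_round(data_manager, trainer, reward_manager, comm, num_stages):
    state = {"round_data": data_manager.init_round(), "rollouts": []}
    for stage in range(num_stages):
        rollouts = trainer.generate(state, stage)   # generation step
        state["rollouts"].extend(rollouts)          # append to game state
        comm.broadcast(rollouts)                    # share with the swarm
    rewards = reward_manager.evaluate(state["rollouts"])
    trainer.train(state, rewards)                   # per-round policy update
    return state, rewards


# Minimal stubs so the loop runs end to end.
class _StubDataManager:
    def init_round(self):
        return "prompt"

class _StubTrainer:
    def __init__(self):
        self.trained = False
    def generate(self, state, stage):
        return [f"rollout-stage-{stage}"]
    def train(self, state, rewards):
        self.trained = True

class _StubRewardManager:
    def evaluate(self, rollouts):
        return [1.0] * len(rollouts)

class _StubComm:
    def broadcast(self, rollouts):
        pass


trainer = _StubTrainer()
state, rewards = run_round(
    _StubDataManager(), trainer, _StubRewardManager(), _StubComm(), num_stages=2
)
```

A per-stage update would simply move the `trainer.train` call inside the stage loop, which is the flexibility the paragraph above describes.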

Example Usage

    pip install .[examples]
    export NUM_PROC_PER_NODE=1
    export NUM_NODES=1
    export MASTER_ADDR="localhost"
    export MASTER_PORT=29500
    ./scripts/train.sh $NUM_NODES $NUM_PROC_PER_NODE multistage_math msm_dapodata_grpo.yaml
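The MASTER_ADDR and MASTER_PORT variables above follow torch.distributed conventions. Assuming scripts/train.sh hands them to a torch.distributed-style launcher (an assumption, not documented here), they combine into a rendezvous address like so:

```python
# Sketch of how the exported variables form a torch.distributed rendezvous
# address. This is an assumption about what scripts/train.sh consumes;
# it does not launch training.
import os

os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"

init_method = f"tcp://{os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']}"
```

For multi-node runs, MASTER_ADDR would point at the rank-0 host instead of localhost, and NUM_NODES / NUM_PROC_PER_NODE would be raised accordingly.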

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


gensyn_genrl-0.1.8-py3-none-any.whl (99.9 kB)


File details

Details for the file gensyn_genrl-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: gensyn_genrl-0.1.8-py3-none-any.whl
  • Upload date:
  • Size: 99.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for gensyn_genrl-0.1.8-py3-none-any.whl

  • SHA256: d1649629113aedf267717e0fa2ef1845b97eef005a7b5ad0f8bc587d9de21cb6
  • MD5: f4374fd8cf62c1d14adbf09c3ebe2e59
  • BLAKE2b-256: 7b8b931271aec172cfc38793490bba64d02e5de01487f3a73ba323ae6bfd6fb5
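A downloaded wheel can be checked against the published SHA256 digest with the standard library alone. The file path in this sketch is a placeholder; substitute the real wheel path and compare against the digest listed above.

```python
# Verify a downloaded file against a published SHA-256 digest using only
# the standard library. Read in chunks so large files don't fill memory.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (placeholder path):
#   sha256_of("gensyn_genrl-0.1.8-py3-none-any.whl") == "<published digest>"
```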

