Agent-Based Simulation and Multi-Agent Reinforcement Learning
Abmarl
Abmarl is a package for developing Agent-Based Simulations and training them with Multi-Agent Reinforcement Learning (MARL). We provide an intuitive command line interface for engaging with the full workflow of MARL experimentation: training, visualizing, and analyzing agent behavior. We define an Agent-Based Simulation Interface and Simulation Manager, which control which agents interact with the simulation at each step. We support integration with popular reinforcement learning simulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. We define our own GridWorld Simulation Framework for creating custom grid-based Agent-Based Simulations.
Abmarl leverages RLlib’s framework for reinforcement learning and extends it to more easily support custom simulations, algorithms, and policies. We enable researchers to rapidly prototype MARL experiments and simulation design and lower the barrier for pre-existing projects to prototype RL as a potential solution.
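The Simulation Manager described above decides which agents act at each step. As a rough, library-agnostic sketch of that idea (the class and method names below are illustrative only, not Abmarl's actual API), a turn-based manager cycles through its agents one at a time:

```python
# Illustrative sketch of a turn-based simulation manager loop.
# Names here are hypothetical; see Abmarl's docs for the real interface.

class EchoAgent:
    """A toy agent whose action is simply echoed back as its observation."""
    def __init__(self, name):
        self.name = name

class TurnBasedLoop:
    """Steps one agent per call, mimicking a turn-based Simulation Manager."""
    def __init__(self, agents, max_steps=4):
        self.agents = agents
        self.max_steps = max_steps
        self.turn = 0

    def step(self, action):
        # Pick the agent whose turn it is, then advance the turn counter.
        agent = self.agents[self.turn % len(self.agents)]
        self.turn += 1
        # Return per-agent observation, reward, and a done flag.
        return {agent.name: action}, {agent.name: 0.0}, self.turn >= self.max_steps

loop = TurnBasedLoop([EchoAgent("a0"), EchoAgent("a1")])
obs, rew, done = loop.step("move")
```

An all-step manager would instead collect one action per agent and step them together; the point is that the manager, not the simulation, owns the turn order.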
Quickstart
To use Abmarl, install via pip: pip install abmarl
To develop Abmarl, clone the repository and install via pip's development mode.
Note: Abmarl requires python3.7 or python3.8.
git clone git@github.com:LLNL/Abmarl.git
cd abmarl
pip install -r requirements.txt
pip install -e . --no-deps
Train agents in a multicorridor simulation:
abmarl train examples/multi_corridor_example.py
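The train command takes a Python configuration script like the example above. Following the pattern in Abmarl's documented examples, such a script exposes a params dictionary; the sketch below shows only the general shape as plain data (the key names follow the documented examples but should be checked against the version you install):

```python
# Rough shape of an Abmarl experiment configuration script.
# Values are placeholders; consult the Abmarl docs for the exact schema.

params = {
    "experiment": {
        "title": "MultiCorridor-demo",     # used to name the results directory
    },
    "ray_tune": {
        "run_or_experiment": "PG",         # RLlib algorithm to train with
        "stop": {"episodes_total": 2000},  # training stop condition
        "config": {
            "num_workers": 1,              # RLlib rollout worker count
        },
    },
}
```

The "ray_tune" block is handed to RLlib/Tune for training, which is why standard RLlib options appear there.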
Visualize trained behavior:
abmarl visualize ~/abmarl_results/MultiCorridor-2020-08-25_09-30/ -n 5 --record
Note: If you install with conda, then you must also include ffmpeg in your virtual environment.
Documentation
You can find the latest Abmarl documentation on our ReadTheDocs page.
Community
Citation
Abmarl has been published in the Journal of Open Source Software (JOSS). It can be cited using the following BibTeX entry:
@article{Rusu2021,
doi = {10.21105/joss.03424},
url = {https://doi.org/10.21105/joss.03424},
year = {2021},
publisher = {The Open Journal},
volume = {6},
number = {64},
pages = {3424},
author = {Edward Rusu and Ruben Glatt},
title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},
journal = {Journal of Open Source Software}
}
Reporting Issues
Please use our issue tracker to report bugs or submit feature requests. Great bug reports tend to include:
- A quick summary and/or background
- Steps to reproduce (sample code is best)
- What you expected to happen
- What actually happens
Contributing
Please submit contributions via pull requests from a forked repository. Find out more about this process here. All contributions fall under the BSD 3-Clause License that covers the project.
Release
LLNL-CODE-815883
File details
Details for the file abmarl-0.2.5.tar.gz.
File metadata
- Download URL: abmarl-0.2.5.tar.gz
- Upload date:
- Size: 81.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.7.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 30374355f29ddc382f8ba8f96b0847350265239ae4f54bb83606eeef0349d54b
MD5 | 49626f966e97bb06affb246bd3e2d775
BLAKE2b-256 | db967596c68d2132f5f98d61053fc012e9a0d2be1c07c2712345b96c42445d65
File details
Details for the file abmarl-0.2.5-py3-none-any.whl.
File metadata
- Download URL: abmarl-0.2.5-py3-none-any.whl
- Upload date:
- Size: 120.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.7.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | ba63685b45fcef26eb9c05386432ced83ebab21edc6a3717940704e5efa88767
MD5 | 463ed455e568f039c7e9f263f60a2d80
BLAKE2b-256 | 855f4d3137041f3be07887b71d7ff62e13bf323c1760ba19ec3cc817fa0c578f