A Framework for Reinforcement Learning in Games
Project description
Build (DW)
Go to basedir/open_spiel/build/ and run:
cmake ../open_spiel/
make
Then set the paths to the open_spiel directories in your .bash_profile, as described in the docs:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/projects/open_spiel/build
export PYTHONPATH=$PYTHONPATH:~/projects/open_spiel/
export PYTHONPATH=$PYTHONPATH:~/projects/open_spiel/build/python
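As a quick sanity check that the build succeeded and the paths above are picked up, a minimal Python snippet like the following (a sketch, assuming the pyspiel module compiled without errors) can list the games registered in this build:

```python
# Sanity check: import the C++ core exposed to Python and list registered games.
import pyspiel

names = pyspiel.registered_names()
print(len(names), "games registered")
print(sorted(names)[:5])
```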
OpenSpiel: A Framework for Reinforcement Learning in Games
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi- agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect and imperfect information games, as well as traditional multiagent environments such as (partially- and fully- observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. Games are represented as procedural extensive-form games, with some natural extensions. The core API and games are implemented in C++ and exposed to Python. Algorithms and tools are written both in C++ and Python.
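As a rough sketch of the core Python API (exact calls may differ slightly between releases), loading a game and stepping through it with uniformly random actions looks like this:

```python
# Sketch of the core API: load a turn-taking game and play it out with
# random actions until a terminal state is reached.
import random

import pyspiel

game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)
print("Returns:", state.returns())
```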
To try OpenSpiel in Google Colaboratory, please refer to the open_spiel/colabs subdirectory or start here.
Index
Please choose among the following options:
- Installing OpenSpiel
- Introduction to OpenSpiel
- API Overview and First Example
- Overview of Implemented Games
- Overview of Implemented Algorithms
- Developer Guide
- Using OpenSpiel as a C++ Library
- Guidelines and Contributing
- Authors
For a longer introduction to the core concepts, formalisms, and terminology, including an overview of the algorithms and some results, please see OpenSpiel: A Framework for Reinforcement Learning in Games.
For an overview of OpenSpiel and example uses of the core API, please check out our tutorials:
- Motivation, Core API, Brief Intro to Replicator Dynamics and Imperfect Information Games by Marc Lanctot. (slides) (colab)
- Motivation, Core API, Implementing CFR and REINFORCE on Kuhn poker, Leduc poker, and Goofspiel by Edward Lockhart. (slides) (colab)
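For instance, the CFR portion of the tutorial above boils down to a loop like the following sketch (assuming the open_spiel.python packages are on PYTHONPATH; exploitability on Kuhn poker should approach zero as iterations increase):

```python
# Sketch: run CFR on Kuhn poker and measure how exploitable the
# resulting average policy is.
import pyspiel
from open_spiel.python.algorithms import cfr, exploitability

game = pyspiel.load_game("kuhn_poker")
solver = cfr.CFRSolver(game)
for _ in range(100):
    solver.evaluate_and_update_policy()

avg_policy = solver.average_policy()
print("Exploitability:", exploitability.exploitability(game, avg_policy))
```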
If you use OpenSpiel in your research, please cite the paper using the following BibTeX:
@article{LanctotEtAl2019OpenSpiel,
title = {{OpenSpiel}: A Framework for Reinforcement Learning in Games},
author = {Marc Lanctot and Edward Lockhart and Jean-Baptiste Lespiau and
Vinicius Zambaldi and Satyaki Upadhyay and Julien P\'{e}rolat and
Sriram Srinivasan and Finbarr Timbers and Karl Tuyls and
Shayegan Omidshafiei and Daniel Hennes and Dustin Morrill and
Paul Muller and Timo Ewalds and Ryan Faulkner and J\'{a}nos Kram\'{a}r
and Bart De Vylder and Brennan Saeta and James Bradbury and David Ding
and Sebastian Borgeaud and Matthew Lai and Julian Schrittwieser and
Thomas Anthony and Edward Hughes and Ivo Danihelka and Jonah Ryan-Davis},
year = {2019},
eprint = {1908.09453},
archivePrefix = {arXiv},
primaryClass = {cs.LG},
journal = {CoRR},
volume = {abs/1908.09453},
url = {http://arxiv.org/abs/1908.09453},
}
Versioning
We use Semantic Versioning.
Project details
Download files
Download the file for your platform.
Source Distribution
File details
Details for the file open_spiel_custom-2.0.1.tar.gz.
File metadata
- Download URL: open_spiel_custom-2.0.1.tar.gz
- Upload date:
- Size: 3.5 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | f6fa1acc46ba83941e37867d0894fb414675dee07e5e1b5e32f52ce59b9e4483
MD5 | dd7723c7fa53bec11a53a65c8836d595
BLAKE2b-256 | bc27e96f3a73f590fb6fb4653c51b1114de3479ad1c4c87ca98ce32b6e3d58fa