Reinforcement Learning for Practitioners (v1.4.1, 20Q1)
Status: under active development, breaking changes may occur. Release notes.
EasyAgents is a high-level reinforcement learning api focusing on ease of use and simplicity. It is written in Python and runs on top of established reinforcement learning libraries such as tf-Agents, tensorforce or keras-rl. Environments are implemented in OpenAI gym. For an example of an industrial application of reinforcement learning see here.
In collaboration with Oliver Zeigermann.
Features
- provides the same, simple api across all libraries, so you can easily switch between implementations without having to learn a new api for each of them.
- creating and running any algorithm takes only 2 lines of code, and all parameters are named consistently across all algorithms.
- supports a broad set of different algorithms
- runs inside jupyter notebooks as well as stand-alone, easy to install requiring only a single 'pip install easyagents'.
- easy to understand, ready-made plots and logs to investigate the algorithm's and the environment's behaviour
Note: keras-rl backend is suspended until support for tensorflow 2.0 is available.
Examples
from easyagents.agents import PpoAgent
from easyagents.callbacks import plot
ppoAgent = PpoAgent('CartPole-v0')
ppoAgent.train([plot.State(), plot.Loss(), plot.Rewards()])
A More Detailed Example
from easyagents.agents import PpoAgent
from easyagents.callbacks import plot
ppoAgent = PpoAgent('Orso-v1', fc_layers=(500, 500, 500))
ppoAgent.train([plot.State(), plot.Loss(), plot.Rewards(), plot.Actions(),
                plot.StepRewards(), plot.Steps(), plot.ToMovie()],
               learning_rate=0.0001, num_iterations=500, max_steps_per_episode=50)
Tutorials
- 1. Introduction (CartPole on colab): training, plotting, switching algorithms & backends. Based on the classic reinforcement learning example balancing a stick on a cart.
- 2. Next steps & backend switching (Orso on colab): custom training, creating a movie & switching backends. gym environment based on a routing problem.
- 3. Controlling training & evaluation (on colab): or 'what do all these agent.train(...) args mean ?'
- 4. Creating your own environment (LineWorld on colab): implement a gym environment from scratch, workshop example (a minimal sketch follows this list).
- 5. Saving & loading (on colab): Once a policy is trained, save it and reload it in a production environment. You may also save intermediate policies as the training proceeds.
- 6. Switching backends (on colab): See how you can switch between backend implementations.
- 7. Api logging, seeding & plot clearing (on colab): investigate how easyagents interacts with the backend api and the gym environment; how to set seeds; how to control jupyter output cell clearing
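To give a first impression of what tutorial 4 covers, here is a minimal sketch of a custom gym environment. The class name, state space and reward scheme below are illustrative assumptions only and do not reproduce the actual LineWorld implementation:

import gym
import numpy as np
from gym import spaces

class LineWorldEnv(gym.Env):
    # illustrative example: the agent moves left/right on a line and should reach position 0
    def __init__(self, size: int = 10):
        self.size = size
        self.position = size - 1
        self.action_space = spaces.Discrete(2)  # 0: move left, 1: move right
        self.observation_space = spaces.Box(low=0, high=size - 1, shape=(1,), dtype=np.float32)

    def reset(self):
        self.position = self.size - 1
        return np.array([self.position], dtype=np.float32)

    def step(self, action):
        self.position += 1 if action == 1 else -1
        self.position = min(max(self.position, 0), self.size - 1)
        done = self.position == 0
        reward = 1.0 if done else -0.1  # small penalty for every extra step
        return np.array([self.position], dtype=np.float32), reward, done, {}

# register the environment so it can be passed to an agent by name
gym.envs.registration.register(id='LineWorld-v0', entry_point=LineWorldEnv, max_episode_steps=50)

Once registered, such an environment trains like any built-in one, e.g. PpoAgent('LineWorld-v0').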
Available Algorithms and Backends
algorithm | tf-Agents | tensorforce | keras-rl (suspended) | easyagents class name
---|---|---|---|---
CEM | not available | not available | yes | CemAgent
Dqn | yes | yes | yes | DqnAgent
Double Dqn | open | not available | yes | DoubleDqnAgent
Dueling Dqn | not available | yes | yes | DuelingDqnAgent
Ppo | yes | yes | not available | PpoAgent
Random | yes | yes | not available | RandomAgent
REINFORCE | yes | yes | not available | ReinforceAgent
SAC | preview | not available | not available | SacAgent
[191001]
- if you are interested in other algorithms, backends or hyperparameters let us know by creating an issue. We'll try our best to support you.
- for documentation of the agents api see here.
- starting with easyagents 1.3 (191102) the backend for keras-rl is suspended until support for tensorflow 2.0 is available.
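Because the api is identical across algorithms, switching between the classes listed above mostly means changing the class name. The snippet below is only a sketch: it assumes, based on the backend-switching tutorials, that the backend implementation is selected via a 'backend' constructor argument; check the api documentation for the exact signature.

from easyagents.agents import DqnAgent, ReinforceAgent
from easyagents.callbacks import plot

# same constructor and train arguments, only the agent class differs
dqnAgent = DqnAgent('CartPole-v0', fc_layers=(100, 100))
dqnAgent.train([plot.Rewards()])

# assumption: the backend implementation is chosen via a 'backend' argument (see tutorials 2 & 6)
reinforceAgent = ReinforceAgent('CartPole-v0', backend='tensorforce')
reinforceAgent.train([plot.Rewards()])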
Industrial Application
Geberit - a sanitary technology company with > 12'000 employees - produces, among other things, pipes and fittings to get rain-water off flat roofs - so-called syphonic roof drainage systems. They warrant that large buildings like stadiums, airports or shopping malls do not collapse during heavy rainfalls. However, it is surprisingly difficult to find the right dimensions for the pipes. It is actually so difficult that as of today no feasible, deterministic algorithm is known. Thus traditional heuristics and classic machine learning were used to support the users in finding a suitable solution.
Using reinforcement learning, the failure rate of the previous solution was reduced by 70%, resulting in an end-to-end success rate of > 98%.
For more details take a look at this talk.
Installation
Install from pypi using pip:
pip install easyagents
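Since easyagents runs stand-alone as well as in jupyter, a quick way to check the installation is a small script reusing the example from above (a sketch; plot rendering outside jupyter may behave differently, which is why ToMovie is used here):

# smoke_test.py - minimal stand-alone training run
from easyagents.agents import PpoAgent
from easyagents.callbacks import plot

ppoAgent = PpoAgent('CartPole-v0')
ppoAgent.train([plot.Rewards(), plot.ToMovie()])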
More
Documentation
for release notes & class diagram, for agents & api.
Guiding Principles
- easily train, evaluate & debug policies for (your own) gym environment over "designing new algorithms"
- simple & consistent over "flexible & powerful"
- inspired by keras:
- same api across all algorithms
- support different implementations of the same algorithm
- extensible (pluggable backends, plots & training schemes)
EasyAgents may not be ideal if
- you would like to leverage implementation specific advantages of an algorithm
- you want to do distributed or parallel reinforcement learning
Note
- If you have any difficulties in installing or using easyagents please let us know by creating an issue. We'll try our best to help you.
- Any ideas, help, suggestions, comments etc in python / open source development / reinforcement learning / whatever are more than welcome.