Project description
Applied Reinforcement Learning with Python
MazeRL is an application-oriented Deep Reinforcement Learning (RL) framework addressing real-world decision problems. Our vision is to cover the complete development life cycle of RL applications, ranging from simulation engineering to agent development, training, and deployment.
This is a preliminary, non-stable release of Maze. It is not yet complete, and not all of our interfaces have settled. Hence, there might be some breaking changes on our way towards the first stable release.
Spotlight Features
Below we list a few selected Maze features.
- Design and visualize your policy and value networks with the Perception Module. It is based on PyTorch and provides a large variety of neural network building blocks and model styles. Quickly compose powerful representation learners from building blocks such as dense, convolution, graph convolution, attention, recurrent architectures, action and observation masking, self-attention, etc. (a plain-PyTorch sketch follows this list).
- Create the conditions for efficient RL training without writing boilerplate code, e.g. by supporting best practices like pre-processing and normalizing your observations (see the normalization sketch below).
- Maze supports advanced environment structures reflecting the requirements of real-world industrial decision problems such as multi-step and multi-agent scenarios. You can of course work with existing Gym-compatible environments.
- Use the provided Maze trainers (A2C, PPO, IMPALA, SAC, Evolution Strategies), which support dictionary action and observation spaces (illustrated below) as well as multi-step (auto-regressive policy) training. Or stick to your favorite tools and trainers by combining Maze with other RL frameworks.
- Out of the box support for advanced training workflows such as imitation learning from teacher policies and policy fine-tuning.
- Keep even complex application and experiment configuration manageable with the Hydra Config System (a minimal Hydra sketch follows below).
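The Perception Module's own API is not shown in this README, so the following is a minimal plain-PyTorch sketch of the kind of representation learner such building blocks compose, combining dense, self-attention, and recurrent blocks. `PolicyNet` and all layer choices are illustrative assumptions, not Maze identifiers.

```python
import torch
from torch import nn

# Plain-PyTorch sketch (not the Maze Perception Module API) of a policy
# network composed from dense, self-attention, and recurrent building blocks.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)  # action logits

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim)
        x = self.encoder(obs_seq)   # dense embedding per timestep
        x, _ = self.attn(x, x, x)   # self-attention across timesteps
        x, _ = self.rnn(x)          # recurrent aggregation
        return self.head(x[:, -1])  # logits for the last timestep

logits = PolicyNet(obs_dim=8, n_actions=4)(torch.randn(2, 5, 8))
```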
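As a hedged illustration of the observation normalization mentioned above, here is a framework-agnostic sketch written as a classic Gym wrapper. `NormalizeObservation` and the running-statistics scheme are assumptions for illustration, not Maze's actual wrapper.

```python
import gym
import numpy as np

class NormalizeObservation(gym.ObservationWrapper):
    """Hypothetical sketch: normalize observations with running mean/std."""

    def __init__(self, env: gym.Env, eps: float = 1e-8):
        super().__init__(env)
        shape = env.observation_space.shape
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps
        self.eps = eps

    def observation(self, obs):
        # Welford-style running update of mean and variance.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.var += (delta * (obs - self.mean) - self.var) / self.count
        return (obs - self.mean) / np.sqrt(self.var + self.eps)

env = NormalizeObservation(gym.make("CartPole-v1"))
```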
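Dictionary action and observation spaces, as mentioned in the trainer bullet above, correspond to standard Gym `spaces.Dict` objects. A minimal plain-Gym sketch with assumed keys (`image`, `features`, `order`, `amount`):

```python
import numpy as np
from gym import spaces

# Dictionary observation space: an image plus a flat feature vector.
observation_space = spaces.Dict({
    "image": spaces.Box(low=0, high=255, shape=(64, 64, 3), dtype=np.uint8),
    "features": spaces.Box(low=-np.inf, high=np.inf, shape=(10,), dtype=np.float32),
})

# Dictionary action space: a discrete choice plus a continuous setpoint.
action_space = spaces.Dict({
    "order": spaces.Discrete(4),
    "amount": spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32),
})

sample = action_space.sample()  # OrderedDict, e.g. {"order": 2, "amount": [0.37]}
```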
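And since the Maze-specific Hydra schema is not spelled out in this README, the snippet below is a generic Hydra usage sketch; the config keys (`env`, `algorithm`) are hypothetical placeholders, not Maze's actual configuration layout.

```python
import hydra
from omegaconf import DictConfig, OmegaConf

# conf/config.yaml (hypothetical keys, not Maze's actual schema):
#   env: cartpole
#   algorithm:
#     name: ppo
#     lr: 0.0003

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # resolved config after composition

if __name__ == "__main__":
    main()
    # Override any key from the command line, e.g.:
    #   python train.py algorithm.lr=0.001 env=my_env
```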
Get Started
- Make sure PyTorch is installed and then get the latest released version of Maze as follows:

```
pip install -U maze-rl

# optionally install RLlib if you want to use it in combination with Maze
pip install ray[rllib] tensorflow
```
Read more about other options like the installation of the latest development version.
:zap: We encourage you to start with Python 3.7, as many popular environments like Atari or Box2D cannot easily be installed in newer Python environments. Maze itself supports newer Python versions, but for Python 3.9 you might have to install additional binary dependencies manually.
- To see Maze in action, check out a first example.
- Try your own Gym env (see the rollout sketch after this list) or visit our Maze step-by-step tutorial.
Installation | First Example | Step-by-Step Tutorial | Documentation
- Clone this project template repo to start your own Maze project.
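As referenced in the Gym-env step above, a plain classic-Gym rollout loop (no Maze API involved) is a quick way to sanity-check your own env before plugging it in; `CartPole-v1` stands in for your environment ID.

```python
import gym

# Classic Gym (< 0.26) API, matching the Python 3.7 setup recommended above.
env = gym.make("CartPole-v1")  # stand-in for your own env ID
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()         # random policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```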
Learn more about Maze
The documentation is the starting point to learn more about the underlying concepts; most importantly, it also provides code snippets and minimum working examples to get you started quickly.
- The Workflow section guides you through typical tasks in an RL project.
- Policy and Value Networks introduces you to the Perception Module, shows how to customize action spaces and the underlying action probability distributions, and covers two styles of policy and value network construction:
  - Template models are composed directly from an environment's observation and action space, allowing you to train with suitable agent networks on a new environment within minutes.
  - Custom models give you the full flexibility of application-specific models, either with the provided Maze building blocks or directly with PyTorch.
- Learn more about core concepts and structures such as the Maze environment hierarchy and the Maze event system, which provides a convenient way to collect statistics and KPIs, enables flexible reward formulation, and supports offline analysis.
- Structured Environments and Action Masking introduces you to a general concept that can greatly improve the performance of trained agents in practical RL problems.
License
Maze is freely available for research and non-commercial use. A commercial license is available; if you are interested, please contact us via our company website or write us an email.
We believe in Open Source principles and aim to transition Maze to a commercial Open Source project, releasing larger parts of the framework under a permissive license in the near future.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Built Distribution
maze_rl-0.1.3.dev5370-py3-none-any.whl
File details
Details for the file maze_rl-0.1.3.dev5370-py3-none-any.whl.
File metadata
- Download URL: maze_rl-0.1.3.dev5370-py3-none-any.whl
- Upload date:
- Size: 537.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/49.6.0.post20210108 requests-toolbelt/0.9.1 tqdm/4.56.2 CPython/3.7.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | bbcd614be0114c661a476440744dcaab91a8c5ffe82df219019825ec71c0febd
MD5 | 19132892ec99c6bc64ee9e99f0d0da54
BLAKE2b-256 | 754db1865c889a1d41148f3a498d97bd54ec06d8d5712b8f9b0f936cff8dc598