Cleanest Deep Reinforcement Learning Implementation Based on Web MVC
Clean deep reinforcement learning code based on the Web MVC architecture, with complete unit tests
Deep reinforcement learning implementations easily turn into messy code, because the interaction loop between an environment and an agent creates many dependencies among classes, and deep learning itself takes special care to keep clean.
To think outside the box: Web engineers have spent years studying the MVC (model-view-controller) architecture to build tidy systems that handle interaction between the Web and users. This MVC architecture turns out to be a very useful insight for deep reinforcement learning implementations as well. MVC points toward an architecture with fewer dependencies, which also makes unit testing easier.
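As a rough illustration of the idea (this is a minimal sketch, not the actual mvc-drl API; all class and method names here are invented for the example), the agent-environment loop can be split so that the policy (model), the logging/rendering (view), and the glue logic (controller) never depend on each other's internals:

```python
# Minimal MVC-style sketch of an RL interaction step.
# NOTE: illustrative only -- not the mvc-drl interfaces.

class Model:
    """Holds the policy; this stub always returns action 0."""
    def act(self, observation):
        return 0

class View:
    """Renders or records results; here it just collects rewards."""
    def __init__(self):
        self.rewards = []
    def render(self, reward):
        self.rewards.append(reward)

class Controller:
    """Mediates between environment callbacks and the model/view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def step(self, observation, reward):
        self.view.render(reward)          # view sees only the reward
        return self.model.act(observation)  # model sees only the observation

controller = Controller(Model(), View())
action = controller.step(observation=[0.0], reward=1.0)
```

Because each piece has a single narrow interface, any of the three can be swapped for a stub in a unit test.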
You can use Docker to set up and run experiments.
Once you have built the image, you can start a container with the nvidia runtime via:
$ ./scripts/up.sh
root@a84ab59aa668:/home/app# ls
Dockerfile  README.md  example.config.json  graphs  mvc  scripts  tests
LICENSE     examples   logs  requirements.txt  test.sh  tools
You need to install the packages listed in requirements.txt, plus tensorflow:
$ pip install -r requirements.txt
$ pip install tensorflow-gpu tensorflow-probability
# if you run example scripts
$ pip install pybullet roboschool
If you have trouble installing tensorflow-probability, check your tensorflow version.
Install as a library
This repository is also available on PyPI, so you can implement extra algorithms on top of mvc-drl.
$ pip install mvc
:warning: This repository is under development, so interfaces may change frequently.
For academic use, we provide baseline implementations that you may need for comparison.
- [x] Proximal Policy Optimization
- [x] Deep Deterministic Policy Gradients
- [x] Soft Actor-Critic
Each point represents the average evaluation reward over 10 episodes. Performance closely matches that reported in the Soft Actor-Critic paper.
$ python -m examples.ppo --env Ant-v2
$ python -m examples.ddpg --env Ant-v2
$ python -m examples.sac --env Ant-v2 --reward-scale 5
All logging data is saved under the logs directory as CSV files and visualization tool data.
Use the --log-adapter option in the example scripts to switch between tensorboard and visdom for visualization (default: tensorboard).
$ tensorboard --logdir logs
To use visdom, you need to fill in the host information of a visdom server:
$ mv example.config.json config.json
$ vim config.json # fill visdom section
Before running experiments, start the visdom server.
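The visdom section of config.json might look something like the following. This is an assumption for illustration; the field names and layout should be checked against example.config.json, which defines the actual schema:

```json
{
  "visdom": {
    "host": "localhost",
    "port": 8097
  }
}
```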
You can visualize with tools/plot_csv.py by directly pointing it at CSV files.
$ python tools/plot_csv.py <path to csv> <path to csv> ...
By default, legends are set to the file paths.
If you want to set them manually, use the --label option:
$ python tools/plot_csv.py --label=experiment1 --label=experiment2 <path to csv> <path to csv>
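If you prefer custom figures, the saved CSV files can also be loaded directly with the standard library. The column names below ("step", "reward") are an assumption about the log schema, used here only for illustration:

```python
# Read a reward column out of a CSV log.
# NOTE: the column names are assumed, not taken from mvc-drl's actual schema.
import csv
import io

# Stand-in for the contents of a file under logs/.
csv_text = "step,reward\n1,10.0\n2,12.5\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
rewards = [float(row["reward"]) for row in rows]
print(rewards)  # [10.0, 12.5]
```

For a real file, replace the io.StringIO stand-in with open(path) and feed the resulting lists to your plotting library of choice.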
To guarantee code quality, all functions and classes, including neural networks, must have unit tests.
The following command runs all unit tests under the tests directory:
$ ./test.sh
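The dependency-light MVC split is what makes this testing requirement practical: a controller can be exercised with stub model and view objects, so no real environment or neural network is needed. The following is a hypothetical example of such a test (the DummyController class is invented for illustration, not taken from mvc-drl):

```python
# Hypothetical unit test showing how an MVC-style controller can be
# tested in isolation using mocks. Not from the mvc-drl test suite.
import unittest
from unittest.mock import MagicMock

class DummyController:
    """Stand-in controller: delegates to an injected model and view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def step(self, obs, reward):
        self.view.render(reward)
        return self.model.act(obs)

class TestController(unittest.TestCase):
    def test_step_delegates_to_model_and_view(self):
        model = MagicMock()
        model.act.return_value = 3
        view = MagicMock()
        controller = DummyController(model, view)
        # The controller should return the model's action...
        self.assertEqual(controller.step([0.0], 1.0), 3)
        # ...and forward the reward to the view exactly once.
        view.render.assert_called_once_with(1.0)

suite = unittest.TestLoader().loadTestsFromTestCase(TestController)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```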