
Framework for deep reinforcement learning

Project Description


DeeR

DeeR is a Python library for deep reinforcement learning. It is built with modularity in mind so that it can easily be adapted to any need. It provides many possibilities out of the box (prioritized experience replay, double Q-learning, DDPG, etc.). Many example environments are also provided (some of them using OpenAI Gym).
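As a quick illustration of one of those techniques, here is a minimal, self-contained sketch of the double Q-learning target computation. It uses plain NumPy with hypothetical names; it is not DeeR's own API, only the idea the library implements:

    # Sketch of the double Q-learning target (illustrative, not DeeR internals):
    # y = r + gamma * Q_target(s', argmax_a Q_online(s', a))  for non-terminal s'
    import numpy as np

    def double_q_targets(rewards, terminals, q_next_online, q_next_target, gamma=0.99):
        best_actions = np.argmax(q_next_online, axis=1)                     # action selected by the online network
        bootstrap = q_next_target[np.arange(len(rewards)), best_actions]    # value estimated by the target network
        return rewards + gamma * bootstrap * (1.0 - terminals)

    # Example: a batch of 2 transitions with 3 discrete actions
    rewards = np.array([1.0, 0.0])
    terminals = np.array([0.0, 1.0])
    q_next_online = np.array([[0.2, 0.5, 0.1], [0.3, 0.1, 0.4]])
    q_next_target = np.array([[0.25, 0.45, 0.15], [0.35, 0.05, 0.40]])
    print(double_q_targets(rewards, terminals, q_next_online, q_next_target))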

Dependencies

This framework is tested under Python 2.7 and Python 3.5. It should also work with Python 3.3 and 3.4.

The required dependencies are NumPy >= 1.10 and joblib >= 0.9. You also need Theano >= 0.8 or TensorFlow >= 0.9, along with the Keras library.

For running the examples, Matplotlib >= 1.1.1 is required. For running the Atari games environment, you need to install ALE >= 0.4.
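Assuming a standard pip-based setup (the exact commands depend on your platform and on whether you pick the Theano or TensorFlow backend), the dependencies and the package itself can typically be installed with:

    pip install "numpy>=1.10" "joblib>=0.9" "matplotlib>=1.1.1"
    pip install tensorflow keras    # or: pip install theano keras
    pip install deer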

Full Documentation

The documentation is available at: http://deer.readthedocs.io/

Here are a few examples:

http://vincent.francois-l.be/img_GeneralDeepQRL/seaquest.gif
http://vincent.francois-l.be/img_GeneralDeepQRL/output7.gif

Release history

0.3.1 (this version)
0.3
0.3.dev1
0.2.4
0.2
0.1

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename (size)                           File type   Python version   Upload date
deer-0.3.1-py2-none-any.whl (166.7 kB)    Wheel       py2              Jun 26, 2017
deer-0.3.1.tar.gz (64.9 kB)               Source      None             Jun 26, 2017
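If you prefer installing from a downloaded file rather than from the index, pip accepts the local path directly, for example:

    pip install deer-0.3.1-py2-none-any.whl    # wheel, Python 2
    pip install deer-0.3.1.tar.gz              # source distribution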
