
A library for Deep Reinforcement Learning (PPO) in PyTorch

Project description

neroRL

neroRL is a PyTorch-based research framework for Deep Reinforcement Learning that specializes in Recurrent Proximal Policy Optimization. It focuses on procedurally generated environments, while providing useful tools for experimenting with and analyzing trained behaviors.
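For context, the core of Proximal Policy Optimization (the algorithm this framework specializes in) is the clipped surrogate objective. Below is a minimal NumPy sketch of that loss; the function name and signature are illustrative only and are not part of neroRL's API:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Clipped PPO surrogate loss: -min(r * A, clip(r, 1-eps, 1+eps) * A).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: advantage estimate for each sampled action
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Taking the minimum removes the incentive to move the policy
    # ratio outside the [1-eps, 1+eps] trust region.
    return -np.minimum(unclipped, clipped)

# With a positive advantage, ratios above 1 + eps are clipped:
print(ppo_clip_loss(np.array([1.5]), np.array([1.0])))  # -> [-1.2]
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough to pair with recurrent networks and procedurally generated environments.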

Features

Obstacle Tower Challenge

Originally, this work started out by achieving 7th place in the Obstacle Tower Challenge using a relatively simple feed-forward CNN (FFCNN). This video presents some footage of the approach and the trained behavior:

Rising to the Obstacle Tower Challenge

Recently, we published a paper at CoG 2020 (best paper candidate) that analyzes this approach. Additionally, the model was trained on three level designs and evaluated on the two held-out ones. The results can be reproduced using the obstacle-tower-challenge branch.

Getting Started

To get started, check out the docs!
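Since neroRL is distributed on PyPI, installation with a standard pip setup should look like this (see the docs for source installs and environment-specific dependencies):

```shell
# Install the released package from PyPI (assumes Python 3 and pip)
pip install neroRL
```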

Project details


Download files


Source Distribution: neroRL-0.0.4.tar.gz (74.6 kB)

Built Distribution: neroRL-0.0.4-py3-none-any.whl (109.2 kB)
