# anyrl-py

This is a Python remake (and makeover) of [anyrl](https://github.com/unixpickle/anyrl). It is a general-purpose library for Reinforcement Learning which aims to be as modular as possible.

# Installation

You can install anyrl with pip:

```
pip install anyrl
```

# APIs

There are several different sub-modules in anyrl:

  • models: abstractions and concrete implementations of RL models. This includes actor-critic RNNs, MLPs, CNNs, etc. Takes care of sequence padding, BPTT, etc.

  • envs: APIs for dealing with environments, including wrappers and asynchronous environments.

  • rollouts: APIs for gathering and manipulating batches of episodes or partial episodes. Many RL algorithms include a “gather trajectories” step, and this sub-module fulfills that role.

  • algos: well-known learning algorithms like policy gradients or PPO. Also includes mini-algorithms like Generalized Advantage Estimation.

  • spaces: tools for using action and observation spaces. Includes parameterized probability distributions for implementing stochastic policies.
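To make the "mini-algorithms" mentioned under `algos` concrete, here is a generic pure-Python sketch of Generalized Advantage Estimation over a single trajectory. This is not anyrl's own implementation; the function name and argument layout are made up for illustration:

```python
def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for one trajectory.

    `values` must have one extra entry at the end: the value estimate
    for the state after the final transition (0.0 if the episode ended).
    """
    advantages = [0.0] * len(rewards)
    running = 0.0
    # Walk backwards, accumulating discounted TD residuals.
    for t in reversed(range(len(rewards))):
        nonterminal = 0.0 if dones[t] else 1.0
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        running = delta + gamma * lam * nonterminal * running
        advantages[t] = running
    return advantages

# Two-step episode ending in a terminal state:
adv = gae_advantages([1.0, 1.0], [0.5, 0.5, 0.0], [False, True])
```

The backward pass means the whole computation is a single O(n) loop, which is why libraries can offer it as a small reusable building block.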

# Motivation

Most existing RL code is tightly coupled and difficult to reuse. In contrast, anyrl aims to be extremely modular and flexible. The goal is to decouple agents, learning algorithms, trajectories, and post-processing steps like GAE.

For example, anyrl decouples rollouts from the learning algorithm (when possible). This way, you can gather rollouts in several different ways and still feed the results into one learning algorithm. Further, and more obviously, you don’t have to rewrite rollout code for every new RL algorithm you implement. However, algorithms like A3C and Evolution Strategies may have specific ways of performing rollouts that can’t rely on the rollout API.
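The decoupling idea can be sketched with toy stand-ins (the environment, function names, and episode format below are invented for illustration and are not anyrl's API): a rollout gatherer only needs a callable policy, so the same gathering code serves any policy and any learning algorithm downstream.

```python
import random

class CoinFlipEnv:
    """Toy environment: reward 1.0 when the action matches a visible bit."""
    def __init__(self):
        self.steps = 0
        self.bit = 0

    def reset(self):
        self.steps = 0
        self.bit = random.randint(0, 1)
        return self.bit

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == self.bit else 0.0
        self.bit = random.randint(0, 1)
        done = self.steps >= 5
        return self.bit, reward, done, {}

def gather_rollout(env, policy):
    """Run one episode; `policy` is any callable mapping obs -> action."""
    obs = env.reset()
    transitions = []
    done = False
    while not done:
        action = policy(obs)
        next_obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward))
        obs = next_obs
    return transitions

# The same gatherer works with any policy callable, neural or not.
rollout = gather_rollout(CoinFlipEnv(), lambda obs: obs)  # "copy the bit"
total_reward = sum(r for _, _, r in rollout)
```

Because `gather_rollout` never inspects the policy's internals, the resulting transitions can be handed to any learner that consumes `(obs, action, reward)` tuples.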

# Use of TensorFlow

This project relies on TensorFlow for models and training algorithms. However, anyrl APIs are framework-agnostic when possible. For example, the rollout API can be used with any policy, whether it’s a TensorFlow neural network or a native-Python decision forest.

# TODO

Here is the current TODO list, organized by sub-module:

  • models
      • Unify CNN and MLP models with a single base class.
      • Unshared actor-critics for TRPO and the like.

  • rollouts
      • Maybe: a way to not record states in model_outs (memory saving).
      • Normalization based on advantage magnitudes.

  • algos
      • TRPO.
      • PPO: allow clipping for the value function.

  • spaces
      • Dict.

  • tests
      • Benchmarks for rollouts.
      • Benchmarks for training.
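For context on the value-function clipping item: some PPO implementations limit how far the new value estimate may move from the old one per update, taking the worse of the clipped and unclipped squared errors. A per-sample sketch (hypothetical names, not anyrl code):

```python
def clipped_value_loss(new_value, old_value, target, clip_eps=0.2):
    """PPO-style value clipping: penalize whichever of the clipped or
    unclipped prediction is further from the target."""
    # Clamp the change in value estimate to [-clip_eps, +clip_eps].
    clipped = old_value + max(-clip_eps, min(clip_eps, new_value - old_value))
    return max((new_value - target) ** 2, (clipped - target) ** 2)
```

Taking the maximum of the two errors makes the loss pessimistic: the value network gains nothing by jumping more than `clip_eps` toward the target in a single update.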
