PyTorch Reinforcement Learning Framework for Researchers
Project description
Cherry is a reinforcement learning framework for researchers, built on top of PyTorch.
Unlike other reinforcement learning implementations, cherry doesn't try to provide a single interface to existing algorithms. Instead, it provides you with common tools to write your own algorithms. Drawing from the UNIX philosophy, each tool strives to be as independent from the rest of the framework as possible. So if you don't like a specific tool, you can still use parts of cherry without headaches.
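For instance, because the reward utilities do not depend on the rest of the framework, you can call them on their own. The short sketch below uses only the two calls that also appear in the Usage snippet further down (ch.rewards.discount_rewards and ch.utils.normalize); the reward values, discount factor, and the assumption that plain Python lists are accepted are illustrative, not part of cherry's documented behavior.
import torch as th
import cherry as ch
rewards = [1.0, 0.0, 0.0, 1.0]       # toy rewards from a single 4-step episode (made up)
dones = [False, False, False, True]  # the episode ends on the last step
returns = ch.rewards.discount_rewards(0.99, rewards, dones)   # discounted returns, as in the Usage snippet
returns = ch.utils.normalize(th.tensor(returns))              # normalized, as in the Usage snippet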
Installation
For now, cherry is still in development.
- Clone the repo:
git clone https://github.com/seba-1511/cherry
cd cherry
pip install -e .
Upon our first public release, you'll be able to
pip install cherry-rl
Development Guidelines
- The master branch is always working and considered stable.
- The dev branch should always work and is ahead of master; it is considered cutting edge.
- To implement a new functionality: branch dev into your_name/functionality_name, implement your functionality, then open a pull request to dev. It will be periodically merged into master.
Usage
The following snippet demonstrates some of the tools offered by cherry.
import torch as th
import cherry as ch
# Wrapping environments
env = ch.envs.Logger(env, interval=1000)  # Prints rollout statistics
env = ch.envs.Normalized(env, normalize_state=True, normalize_reward=False)
env = ch.envs.Torch(env) # Converts actions/states to tensors
# Storing and retrieving experience
replay = ch.ExperienceReplay()
replay.add(old_state, action, reward, state, done, info={
    'log_prob': mass.log_prob(action),  # Can add any variable/tensor to the transitions
    'value': value
})
replay.actions # Tensor of all stored actions
replay.states # Tensor of all stored states
replay.empty() # Removes all stored experience
# Discounting and normalizing rewards
rewards = ch.rewards.discount_rewards(GAMMA, replay.rewards, replay.dones)
rewards = ch.utils.normalize(th.tensor(rewards))
# Sampling rollouts per episode or samples
num_samples, num_episodes = ch.rollouts.collect(
    env,
    get_action,
    replay,
    num_episodes=10,  # alternatively: num_samples=1000
)
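Putting the pieces together, here is a rough REINFORCE-style training loop built only from the calls shown above (ch.envs.Torch, ch.ExperienceReplay, ch.rewards.discount_rewards, ch.utils.normalize). The CartPole-v0 environment, the policy network, the learning rate, GAMMA, the locally kept log_probs list, and the policy-gradient loss are illustrative assumptions and plain gym/PyTorch, not cherry APIs; treat it as a sketch, not a reference implementation.
import gym
import torch as th
import torch.nn as nn
import cherry as ch

GAMMA = 0.99
env = ch.envs.Torch(gym.make('CartPole-v0'))  # CartPole: 4-dimensional states, 2 discrete actions
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = th.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    replay = ch.ExperienceReplay()
    log_probs = []                            # kept locally for the loss below
    state = env.reset()
    done = False
    while not done:
        mass = th.distributions.Categorical(logits=policy(state))
        action = mass.sample()
        log_probs.append(mass.log_prob(action))
        old_state = state
        state, reward, done, _ = env.step(action)
        replay.add(old_state, action, reward, state, done, info={
            'log_prob': log_probs[-1],        # same pattern as the snippet above
        })
    # Discount and normalize the episode's rewards, as in the snippet above
    rewards = ch.rewards.discount_rewards(GAMMA, replay.rewards, replay.dones)
    rewards = ch.utils.normalize(th.tensor(rewards))
    # Vanilla policy-gradient update (plain PyTorch, not a cherry API)
    loss = -th.sum(th.cat([lp.view(1) for lp in log_probs]) * rewards.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()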
Concrete examples are available in the examples/ folder.
Documentation
The documentation will be written as the core concepts of cherry begin to converge.
TODO
Some functionalities that we might want to implement:
- parallelized environments and a way to handle them with ExperienceReplay,
- VisdomLogger as a dashboard to debug an implementation,
- an example with a recurrent net,
- minimal but complete documentation,
- a few extensive tutorials (Getting started with distributed A2C, Advanced usage (which?) with PPO, and another on debugging your algorithms).
Acknowledgements
Cherry draws inspiration from many reinforcement learning implementations, including
- OpenAI Baselines,
- John Schulman's implementations,
- Ilya Kostrikov's implementations,
- Shangtong Zhang's implementations,
- RLLab,
- RLKit.
File details
Details for the file cherry-rl-0.0.5.1.tar.gz.
File metadata
- Download URL: cherry-rl-0.0.5.1.tar.gz
- Upload date:
- Size: 18.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.18.4 setuptools/39.2.0 requests-toolbelt/0.8.0 tqdm/4.23.4 CPython/3.6.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6459c041d6d76d44f8ac8f373ebf64b711833faae5f0e738357f84e2e3a205ff
MD5 | 7704a663c4f99e81504ac703acc36a63
BLAKE2b-256 | 72ec1165b635150cde38e26db9cb4570ddfb4716f12dd2595043aed951b61169