
A package to learn about Reinforcement Learning

Project description


LearnRL is a library to use and learn reinforcement learning. It's also a community of supportive enthusiasts who love to share and build RL-based AI projects! We would love to help you make projects with LearnRL, so join us on Discord!

About LearnRL

LearnRL is a tool to monitor and log reinforcement learning experiments. You build or find any compatible agent (it only needs an act method), you build or find a gym environment, and learnrl makes them interact together! LearnRL also includes both Tensorboard and Weights & Biases integrations for beautiful and shareable experiment tracking! LearnRL is cross-platform compatible as well! That's why no agents are built into learnrl itself, but you can check:
- LearnRL for Tensorflow
- LearnRL for Pytorch

You can build and run your own Agent in a clear and shareable manner!

import learnrl as rl
import gym

class MyAgent(rl.Agent):

   def act(self, observation, greedy=False):
      """ How the Agent act given an observation """
      ...
      return action

   def learn(self):
      """ How the Agent learns from his experiences """
      ...
      return logs

   def remember(self, observation, action, reward, done, next_observation=None, info={}, **param):
      """ How the Agent will remember experiences """
      ...

env = gym.make('FrozenLake-v0', is_slippery=True)  # This could be any gym Environment!
agent = MyAgent(env.observation_space, env.action_space)

playground = rl.Playground(env, agent)
playground.fit(2000, verbose=1)

Note that 'learn' and 'remember' are optional, so this framework can also be used for baselines, as in the sketch below!
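For example, a purely random baseline only needs act. Below is a minimal sketch; the RandomAgent class, its constructor, and the use of the action space's sample method are illustrative assumptions, not part of the learnrl API.

import learnrl as rl
import gym

class RandomAgent(rl.Agent):

   def __init__(self, observation_space, action_space):
      """ Hypothetical constructor, mirroring the example above """
      self.action_space = action_space

   def act(self, observation, greedy=False):
      """ Ignore the observation and sample a random action """
      return self.action_space.sample()

env = gym.make('FrozenLake-v0', is_slippery=True)
agent = RandomAgent(env.observation_space, env.action_space)

playground = rl.Playground(env, agent)
playground.fit(2000, verbose=1)  # Works even though the agent never learns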

You can log any custom metrics that your Agent/Env gives you and even choose how to aggregate them across different timescales. See the metric codes for more details.

metrics = [
    ('reward~env-rwd', {'steps': 'sum', 'episode': 'sum'}),
    ('handled_reward~reward', {'steps': 'sum', 'episode': 'sum'}),
    'value_loss~vloss',
    'actor_loss~aloss',
    'exploration~exp'
]

playground.fit(2000, verbose=1, metrics=metrics)
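These custom metric values are assumed to come from the logs returned by your agent, as in the learn method of the example above; a minimal sketch, where the keys simply match the metric names declared in the list rather than any fixed learnrl naming scheme:

import learnrl as rl

class LoggingAgent(rl.Agent):

   def act(self, observation, greedy=False):
      """ Dummy action, just for illustration """
      return 0

   def learn(self):
      """ Return a logs dict whose keys match the declared metrics """
      ...
      return {'value_loss': 0.0, 'actor_loss': 0.0, 'exploration': 0.1}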

The Playground also lets you add Callbacks with ease, for example the WandbCallback for a nice experiment-tracking dashboard!
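A hedged sketch of what this could look like; the learnrl.callbacks import path, the WandbCallback constructor taking a wandb run, and the callbacks argument of fit are all assumptions to check against the documentation:

import learnrl as rl
import wandb

from learnrl.callbacks import WandbCallback  # assumed import path

run = wandb.init(project='my-learnrl-project')  # hypothetical project name
wandb_callback = WandbCallback(run)  # assumed constructor signature

playground = rl.Playground(env, agent)  # env and agent as defined above
playground.fit(2000, verbose=1, callbacks=[wandb_callback])  # assumed callbacks argument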

Installation

Install LearnRL by running:

pip install learnrl

Get started

Create:
- TODO: Numpy tutorials
- TODO: Tensorflow tutorials
- TODO: Pytorch tutorials

Visualize:
- TODO: Tensorboard visualisation tutorial
- TODO: Wandb visualisation tutorial
- TODO: Wandb sweeps tutorial

Documentation

See the latest complete documentation for more details. See the development documentation to see what's coming!

Contribute

Support

If you are having issues, please contact us on Discord.

License

The project is licensed under the GNU LGPLv3 license.
See LICENCE, COPYING and COPYING.LESSER for more details.
