
A package to learn about Reinforcement Learning

Project description


LearnRL is a library to use and learn reinforcement learning. It's also a community of supportive enthusiasts who love to share and build RL-based AI projects! We would love to help you make projects with LearnRL, so join us on Discord!

About LearnRL

LearnRL is a tool to monitor and log reinforcement learning experiments. You build or find any compatible agent (it only needs an act method), you build or find a Gym environment, and LearnRL makes them interact together! LearnRL also comes with both TensorBoard and Weights & Biases integrations for beautiful and shareable experiment tracking! LearnRL is framework-agnostic, which is why no agents are built into learnrl itself; instead, you can check:

  • LearnRL for Tensorflow
  • LearnRL for Pytorch

You can build and run your own Agent in a clear and shareable manner!

import learnrl as rl
import gym

class MyAgent(rl.Agent):

   def act(self, observation, greedy=False):
      """ How the Agent act given an observation """
      ...
      return action

   def learn(self):
      """ How the Agent learns from his experiences """
      ...
      return logs

   def remember(self, observation, action, reward, done, next_observation=None, info={}, **param):
      """ How the Agent will remember experiences """
      ...

env = gym.make('FrozenLake-v0', is_slippery=True) # This could be any Gym environment!
agent = MyAgent(env.observation_space, env.action_space)

playground = rl.Playground(env, agent)
playground.fit(2000, verbose=1)

Note that ‘learn’ and ‘remember’ are optional, so this framework can also be used for baselines!
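For example, a pure baseline only needs act. The snippet below is an illustrative sketch, not part of learnrl: the RandomBaseline class and its constructor are assumptions, relying only on the statement above that an agent just needs an act method.

import learnrl as rl
import gym

class RandomBaseline:

   def __init__(self, action_space):
      # Assumes a discrete Gym action space (e.g. FrozenLake)
      self.action_space = action_space

   def act(self, observation, greedy=False):
      """ Pick a random action regardless of the observation """
      # Only act is required by the Playground; learn and remember are simply omitted
      return self.action_space.sample()

env = gym.make('FrozenLake-v0', is_slippery=True)
agent = RandomBaseline(env.action_space)

playground = rl.Playground(env, agent)
playground.fit(2000, verbose=1)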

You can log any custom metrics that your Agent/Env gives you and even choose how to aggregate them through different timescales. See the metric codes for more details.

metrics = [
    ('reward~env-rwd', {'steps': 'sum', 'episode': 'sum'}),
    ('handled_reward~reward', {'steps': 'sum', 'episode': 'sum'}),
    'value_loss~vloss',
    'actor_loss~aloss',
    'exploration~exp',
]

playground.fit(2000, verbose=1, metrics=metrics)

The Playground also allows you to add Callbacks with ease, for example the WandbCallback to have a nice experiment tracking dashboard!
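A minimal sketch of how that could look, assuming WandbCallback can be imported from learnrl.callbacks, takes a wandb run as its argument, and that playground.fit accepts a callbacks list; check the documentation for the exact signatures.

import wandb
from learnrl.callbacks import WandbCallback  # assumed import path, see the documentation

# Assumed usage: create a wandb run and hand it to the callback
run = wandb.init(project='learnrl-demo')  # 'learnrl-demo' is a placeholder project name
wandb_callback = WandbCallback(run)

playground.fit(2000, verbose=1, callbacks=[wandb_callback])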

Installation

Install LearnRL by running:

pip install learnrl

Get started

Create:
  • TODO: Numpy tutorials
  • TODO: Tensorflow tutorials
  • TODO: Pytorch tutorials

Visualize:
  • TODO: Tensorboard visualisation tutorial
  • TODO: Wandb visualisation tutorial
  • TODO: Wandb sweeps tutorial

Documentation

See the latest complete documentation for more details. See the development documentation to see what’s coming !

Contribute

Support

If you are having issues, please contact us on Discord.

License

The project is licensed under the GNU LGPLv3 license.
See LICENCE, COPYING and COPYING.LESSER for more details.

Project details


Download files

Download the file for your platform.

Source Distribution

learnrl-1.0.2.tar.gz (42.7 kB)

Uploaded Source

Built Distribution

learnrl-1.0.2-py3-none-any.whl (50.5 kB)

Uploaded Python 3

File details

Details for the file learnrl-1.0.2.tar.gz.

File metadata

  • Download URL: learnrl-1.0.2.tar.gz
  • Upload date:
  • Size: 42.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for learnrl-1.0.2.tar.gz

  • SHA256: 38747ae17d00206774111cbbcfcc49de7faac573d531ed7ed19d3af4ea0c4150
  • MD5: 9b7020c98f1617c6fad7fafddc874273
  • BLAKE2b-256: f1886d6fb038258862ba387c93ff798fca773d8e2fbe0335e59bf344a36db41f

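To verify a downloaded archive against these values, you can recompute the digest locally. A minimal sketch in plain Python, assuming learnrl-1.0.2.tar.gz is in the current directory; the expected SHA256 is the one listed above.

import hashlib

EXPECTED_SHA256 = '38747ae17d00206774111cbbcfcc49de7faac573d531ed7ed19d3af4ea0c4150'

# Hash the downloaded archive in chunks to keep memory use low
sha256 = hashlib.sha256()
with open('learnrl-1.0.2.tar.gz', 'rb') as f:
   for chunk in iter(lambda: f.read(8192), b''):
      sha256.update(chunk)

if sha256.hexdigest() == EXPECTED_SHA256:
   print('Hash matches the value published on PyPI')
else:
   print('Hash mismatch: do not install this file')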

File details

Details for the file learnrl-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: learnrl-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 50.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for learnrl-1.0.2-py3-none-any.whl

  • SHA256: 455135d7a34dd23a468d3ff6139897968653ca8b2fd214a648fd58d2c1237619
  • MD5: 7dd164babfbd04653ff00943e41a70bc
  • BLAKE2b-256: f879bf7c3543604cad091b38c49fb0d1fce56ea1ecdde80bee572a586e9a18cc

