
AgileRL

Reinforcement learning streamlined.
Easier and faster reinforcement learning with RLOps. Visit our website. View documentation.
Join the Discord Server for questions, help and collaboration.


AgileRL 2.0 is here! Check out the latest powerful updates

🚀 Train super-fast for free on Arena, the RLOps platform from AgileRL 🚀


AgileRL is a Deep Reinforcement Learning library focused on improving development by introducing RLOps - MLOps for reinforcement learning.

This library initially focuses on reducing the time taken to train models and perform hyperparameter optimization (HPO) by pioneering evolutionary HPO techniques for reinforcement learning.
Evolutionary HPO has been shown to drastically reduce overall training times by automatically converging on optimal hyperparameters, without requiring numerous training runs.
We are constantly adding more algorithms and features. AgileRL already includes state-of-the-art evolvable on-policy, off-policy, offline, multi-agent and contextual multi-armed bandit reinforcement learning algorithms with distributed training.
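To make the idea concrete, below is a minimal, library-agnostic sketch of this style of evolutionary HPO loop. It is not AgileRL's implementation: train_fn, eval_fn and mutate_fn are hypothetical callables standing in for a short training burst, a fitness evaluation, and a hyperparameter or architecture mutation.

import copy
import random

def evolutionary_hpo(population, generations, train_fn, eval_fn, mutate_fn, tourn_size=2):
    """Sketch only: evolve a population of agents towards better hyperparameters."""
    for _ in range(generations):
        for agent in population:
            train_fn(agent)                                    # short training burst per agent
        scores = {id(a): eval_fn(a) for a in population}       # fitness per agent
        elite = max(population, key=lambda a: scores[id(a)])   # elitism: keep the best unchanged
        new_population = [elite]
        while len(new_population) < len(population):
            # Tournament selection, then mutate the winner's hyperparameters/architecture
            winner = max(random.sample(population, tourn_size), key=lambda a: scores[id(a)])
            new_population.append(mutate_fn(copy.deepcopy(winner)))
        population = new_population
    return population

Elitism keeps the current best agent untouched each generation, while tournament selection plus mutation steers the rest of the population towards well-performing hyperparameters, all within a single training run.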

AgileRL offers 10x faster hyperparameter optimization than SOTA.

Get Started

To see the full AgileRL documentation, including tutorials, visit our documentation site. To ask questions and get help, collaborate, or discuss anything related to reinforcement learning, join the AgileRL Discord Server.

Install as a package with pip:

pip install agilerl

Or install in development mode:

git clone https://github.com/AgileRL/AgileRL.git && cd AgileRL
pip install -e .

Benchmarks

Reinforcement learning algorithms and libraries are usually benchmarked once the optimal hyperparameters for training are known, but it often takes hundreds or thousands of experiments to discover these. This is unrealistic and does not reflect the true, total time taken for training. What if we could remove the need to conduct all these prior experiments?

In the charts below, a single AgileRL run, which tunes hyperparameters automatically, is benchmarked against the multiple Optuna training runs traditionally required for hyperparameter optimization, demonstrating the true time savings possible. Global steps are the sum of every step taken by every agent in the environment, across the entire population.
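As a purely illustrative calculation, using the population size and vectorized environment count from the example configuration later on this page:

pop_size, num_envs = 6, 16                        # POP_SIZE and num_envs from the example below
global_steps_per_iteration = pop_size * num_envs
print(global_steps_per_iteration)                 # 96 global steps each time every vectorized env advances one step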

AgileRL offers an order-of-magnitude speed-up in hyperparameter optimization versus popular reinforcement learning training frameworks combined with Optuna. Remove the need for multiple training runs and save yourself hours.

AgileRL also supports multi-agent reinforcement learning using the PettingZoo parallel API. The charts below highlight the performance of our MADDPG and MATD3 algorithms with evolutionary hyperparameter optimization (HPO), benchmarked against EPyMARL's MADDPG algorithm with grid-search HPO on the simple speaker listener and simple spread environments.
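For reference, these benchmark environments follow the PettingZoo parallel API. The snippet below is a hedged sketch of loading and inspecting one of them; it assumes pettingzoo[mpe] is installed, and the exact module name may differ between PettingZoo releases.

from pettingzoo.mpe import simple_speaker_listener_v4

env = simple_speaker_listener_v4.parallel_env(continuous_actions=True)
observations, infos = env.reset(seed=42)

# In the parallel API, each agent has its own observation and action space
for agent_id in env.agents:
    print(agent_id, env.observation_space(agent_id), env.action_space(agent_id))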

Tutorials

We are constantly updating our tutorials to showcase the latest features of AgileRL and how users can leverage our evolutionary HPO to achieve 10x faster hyperparameter optimization. Please see the available tutorials below.

| Tutorial Type | Description | Tutorials |
| --- | --- | --- |
| Single-agent tasks | Guides for training both on- and off-policy agents to beat a variety of Gymnasium environments. | PPO - Acrobot, TD3 - Lunar Lander, Rainbow DQN - CartPole |
| Multi-agent tasks | Use of PettingZoo environments, such as training DQN to play Connect Four with curriculum learning and self-play, and multi-agent tasks in MPE environments. | DQN - Connect Four, MADDPG - Space Invaders, MATD3 - Speaker Listener |
| Hierarchical curriculum learning | Shows how to teach agents skills and combine them to achieve an end goal. | PPO - Lunar Lander |
| Contextual multi-armed bandits | Learn to make the correct decision in environments that only have one timestep. | NeuralUCB - Iris Dataset, NeuralTS - PenDigits |
| Custom Modules & Networks | Learn how to create custom evolvable modules and networks for RL algorithms. | Dueling Distributional Q Network, EvolvableSimBa |

Evolvable algorithms (more coming soon!)

Single-agent algorithms

| RL | Algorithm |
| --- | --- |
| On-Policy | Proximal Policy Optimization (PPO) |
| Off-Policy | Deep Q Learning (DQN) |
| Off-Policy | Rainbow DQN |
| Off-Policy | Deep Deterministic Policy Gradient (DDPG) |
| Off-Policy | Twin Delayed Deep Deterministic Policy Gradient (TD3) |
| Offline | Conservative Q-Learning (CQL) |
| Offline | Implicit Language Q-Learning (ILQL) |

Multi-agent algorithms

| RL | Algorithm |
| --- | --- |
| Multi-agent | Multi-Agent Deep Deterministic Policy Gradient (MADDPG) |
| Multi-agent | Multi-Agent Twin-Delayed Deep Deterministic Policy Gradient (MATD3) |

Contextual multi-armed bandit algorithms

| RL | Algorithm |
| --- | --- |
| Bandits | Neural Contextual Bandits with UCB-based Exploration (NeuralUCB) |
| Bandits | Neural Contextual Bandits with Thompson Sampling (NeuralTS) |

Train an agent to beat a Gym environment

Before starting training, there are some meta-hyperparameters and settings that must be set. These are defined in INIT_HP, for general parameters; MUTATION_PARAMS, which defines the evolutionary probabilities; and NET_CONFIG, which defines the network architecture. For example:

INIT_HP = {
    'ENV_NAME': 'LunarLander-v2',   # Gym environment name
    'ALGO': 'DQN',                  # Algorithm
    'DOUBLE': True,                 # Use double Q-learning
    'CHANNELS_LAST': False,         # Swap image channels dimension from last to first [H, W, C] -> [C, H, W]
    'BATCH_SIZE': 256,              # Batch size
    'LR': 1e-3,                     # Learning rate
    'MAX_STEPS': 1_000_000,         # Max no. steps
    'TARGET_SCORE': 200.,           # Early training stop at avg score of last 100 episodes
    'GAMMA': 0.99,                  # Discount factor
    'MEMORY_SIZE': 10000,           # Max memory buffer size
    'LEARN_STEP': 1,                # Learning frequency
    'TAU': 1e-3,                    # For soft update of target parameters
    'TOURN_SIZE': 2,                # Tournament size
    'ELITISM': True,                # Elitism in tournament selection
    'POP_SIZE': 6,                  # Population size
    'EVO_STEPS': 10_000,            # Evolution frequency
    'EVAL_STEPS': None,             # Evaluation steps
    'EVAL_LOOP': 1,                 # Evaluation episodes
    'LEARNING_DELAY': 1000,         # Steps before starting learning
    'WANDB': True,                  # Log with Weights and Biases
}
MUTATION_PARAMS = {
    # Relative probabilities
    'NO_MUT': 0.4,                              # No mutation
    'ARCH_MUT': 0.2,                            # Architecture mutation
    'NEW_LAYER': 0.2,                           # New layer mutation
    'PARAMS_MUT': 0.2,                          # Network parameters mutation
    'ACT_MUT': 0,                               # Activation layer mutation
    'RL_HP_MUT': 0.2,                           # Learning HP mutation
    'MUT_SD': 0.1,                              # Mutation strength
    'RAND_SEED': 1,                             # Random seed
}
NET_CONFIG = {
    'latent_dim': 16,           # Latent dimension of the encoder output
    'encoder_config': {
        'hidden_size': [32]     # Observation encoder configuration
    },
    'head_config': {
        'hidden_size': [32]     # Network head configuration
    }
}

First, use agilerl.utils.utils.create_population to create a list of agents - our population, which will evolve and mutate to find the optimal hyperparameters.

import torch
from agilerl.utils.utils import (
    make_vect_envs,
    create_population,
    observation_space_channels_to_first
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_envs = 16
env = make_vect_envs(env_name=INIT_HP['ENV_NAME'], num_envs=num_envs)

observation_space = env.single_observation_space
action_space = env.single_action_space
if INIT_HP['CHANNELS_LAST']:
    observation_space = observation_space_channels_to_first(observation_space)

agent_pop = create_population(
    algo=INIT_HP['ALGO'],                 # Algorithm
    observation_space=observation_space,  # Observation space
    action_space=action_space,            # Action space
    net_config=NET_CONFIG,                # Network configuration
    INIT_HP=INIT_HP,                      # Initial hyperparameters
    population_size=INIT_HP['POP_SIZE'],  # Population size
    num_envs=num_envs,                    # Number of vectorized environments
    device=device
)

Next, create the tournament, mutations and experience replay buffer objects that allow agents to share memory and efficiently perform evolutionary HPO.

from agilerl.components.replay_buffer import ReplayBuffer
from agilerl.hpo.tournament import TournamentSelection
from agilerl.hpo.mutation import Mutations

field_names = ["state", "action", "reward", "next_state", "done"]
memory = ReplayBuffer(
    memory_size=INIT_HP['MEMORY_SIZE'],   # Max replay buffer size
    field_names=field_names,              # Field names to store in memory
    device=device,
)

tournament = TournamentSelection(
    tournament_size=INIT_HP['TOURN_SIZE'], # Tournament selection size
    elitism=INIT_HP['ELITISM'],            # Elitism in tournament selection
    population_size=INIT_HP['POP_SIZE'],   # Population size
    eval_loop=INIT_HP['EVAL_LOOP'],        # Evaluate using last N fitness scores
)

mutations = Mutations(
    no_mutation=MUTATION_PARAMS['NO_MUT'],                # No mutation
    architecture=MUTATION_PARAMS['ARCH_MUT'],             # Architecture mutation
    new_layer_prob=MUTATION_PARAMS['NEW_LAYER'],          # New layer mutation
    parameters=MUTATION_PARAMS['PARAMS_MUT'],             # Network parameters mutation
    activation=MUTATION_PARAMS['ACT_MUT'],                # Activation layer mutation
    rl_hp=MUTATION_PARAMS['RL_HP_MUT'],                   # Learning HP mutation
    mutation_sd=MUTATION_PARAMS['MUT_SD'],                # Mutation strength
    rand_seed=MUTATION_PARAMS['RAND_SEED'],               # Random seed
    device=device,
)

The easiest way to implement a training loop is to use our train_off_policy() function. It requires agents to have get_action() and learn() methods.

from agilerl.training.train_off_policy import train_off_policy

trained_pop, pop_fitnesses = train_off_policy(
    env=env,                                   # Gym-style environment
    env_name=INIT_HP['ENV_NAME'],              # Environment name
    algo=INIT_HP['ALGO'],                      # Algorithm
    pop=agent_pop,                             # Population of agents
    memory=memory,                             # Replay buffer
    swap_channels=INIT_HP['CHANNELS_LAST'],    # Swap image channel from last to first
    max_steps=INIT_HP["MAX_STEPS"],            # Max number of training steps
    evo_steps=INIT_HP['EVO_STEPS'],            # Evolution frequency
    eval_steps=INIT_HP["EVAL_STEPS"],          # Number of steps in evaluation episode
    eval_loop=INIT_HP["EVAL_LOOP"],            # Number of evaluation episodes
    learning_delay=INIT_HP['LEARNING_DELAY'],  # Steps before starting learning
    target=INIT_HP['TARGET_SCORE'],            # Target score for early stopping
    tournament=tournament,                     # Tournament selection object
    mutation=mutations,                        # Mutations object
    wb=INIT_HP['WANDB'],                       # Weights and Biases tracking
)
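Once training finishes, a natural next step is to keep the fittest member of the returned population. The snippet below is a hedged sketch, not a confirmed API: it assumes each agent records its evaluation scores in an agent.fitness list and exposes a save_checkpoint() method, so check the AgileRL documentation for the exact attribute and method names.

# Hypothetical post-training step: keep the agent with the best most recent fitness
best_agent = max(trained_pop, key=lambda agent: agent.fitness[-1])
best_agent.save_checkpoint("best_agent.pt")   # file path is just an example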

Citing AgileRL

If you use AgileRL in your work, please cite the repository:

@software{Ustaran-Anderegg_AgileRL,
  author = {Ustaran-Anderegg, Nicholas and Pratt, Michael and Sabal-Bermudez, Jaime},
  license = {Apache-2.0},
  title = {{AgileRL}},
  url = {https://github.com/AgileRL/AgileRL}
}
