A JAX version of Stable Baselines: implementations of reinforcement learning algorithms.
Stable Baselines Jax (SB3 + JAX = SBX)
See https://github.com/araffin/sbx
A proof-of-concept version of Stable-Baselines3 in JAX.
Implemented algorithms (each follows the Stable-Baselines3 API; see the sketch after this list):
- Soft Actor-Critic (SAC) and SAC-N
- Truncated Quantile Critics (TQC)
- Dropout Q-Functions for Doubly Efficient Reinforcement Learning (DroQ)
- Proximal Policy Optimization (PPO)
- Deep Q Network (DQN)
- Twin Delayed DDPG (TD3)
- Deep Deterministic Policy Gradient (DDPG)
- Batch Normalization in Deep Reinforcement Learning (CrossQ)
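Since SBX is intended as a drop-in JAX counterpart of Stable-Baselines3, training, saving, and loading are expected to work the same way across all of the algorithms above. A minimal sketch, assuming the standard SB3-style save/load interface and Gymnasium's built-in "CartPole-v1" environment:

from sbx import DQN

# Train a Deep Q Network on CartPole, then round-trip it through save/load.
model = DQN("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=20_000)
model.save("dqn_cartpole")  # SB3-style save (assumption: SBX mirrors this API)
model = DQN.load("dqn_cartpole")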
Example
from sbx import DDPG, DQN, PPO, SAC, TD3, TQC, CrossQ

# Train Truncated Quantile Critics (TQC) on the Pendulum-v1 control task
model = TQC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10_000, progress_bar=True)
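After training, the learned policy can be rolled out with the usual predict() call. A hedged sketch, assuming the standard Stable-Baselines3 predict API and Gymnasium's reset/step interface:

import gymnasium as gym

# Roll out the trained policy, resetting whenever an episode ends.
env = gym.make("Pendulum-v1")
obs, _ = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()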
Download files
Source distribution: sbx_rl-0.18.0.tar.gz (44.2 kB)
Built distribution: sbx_rl-0.18.0-py3-none-any.whl (57.1 kB)