
A modular reinforcement learning library

Project description

🍒 emote

Embark's Modular Training Engine - a flexible framework for reinforcement learning


🚧 This project is very much work in progress and not yet ready for production use. 🚧

What it does

Emote provides a way to build reusable components for creating reinforcement learning algorithms, along with a library of premade components built in this way. It is strongly inspired by the callback setup used by Keras and FastAI.
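To make the callback pattern concrete before turning to a real algorithm, here is a minimal sketch of a callback-driven training loop. This is illustrative pseudocode only: the hook names and the MiniTrainer class are assumptions made for the sketch, not emote's actual API.

class Callback:
    # Hooks a trainer invokes around the training loop; subclasses override
    # whichever hooks they need (hypothetical names, for illustration only).
    def begin_training(self): ...
    def begin_batch(self, batch): ...
    def end_batch(self): ...
    def end_training(self): ...

class MiniTrainer:
    def __init__(self, callbacks, dataloader):
        self.callbacks = callbacks
        self.dataloader = dataloader

    def train(self):
        for cb in self.callbacks:
            cb.begin_training()
        for batch in self.dataloader:
            # Every callback sees every batch, so losses, logging, and
            # data collection can all be expressed as callbacks.
            for cb in self.callbacks:
                cb.begin_batch(batch)
            for cb in self.callbacks:
                cb.end_batch()
        for cb in self.callbacks:
            cb.end_training()

Losses, data collection, and tests can then be freely recombined, which is the composition style the SAC example below uses.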

As an example, let us see how SAC, the Soft Actor-Critic algorithm by Haarnoja et al., can be written using Emote. The main algorithm in SAC is given in Soft Actor-Critic Algorithms and Applications and looks like this:

[Figure: the main SAC algorithm]
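The figure itself is not reproduced here, but the objectives it alternates between are worth restating, since each one maps onto a callback in the example below. In the paper's notation:

J_Q(\theta) = \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}} \Big[ \tfrac{1}{2} \big( Q_\theta(s_t, a_t) - ( r(s_t, a_t) + \gamma \, \mathbb{E}_{s_{t+1}} [ V_{\bar\theta}(s_{t+1}) ] ) \big)^2 \Big]

J_\pi(\phi) = \mathbb{E}_{s_t \sim \mathcal{D},\, a_t \sim \pi_\phi} \big[ \alpha \log \pi_\phi(a_t \mid s_t) - Q_\theta(s_t, a_t) \big]

J(\alpha) = \mathbb{E}_{a_t \sim \pi_\phi} \big[ -\alpha \log \pi_\phi(a_t \mid s_t) - \alpha \bar{\mathcal{H}} \big]

Here the soft value V_{\bar\theta}(s) is computed from the target networks as \min_i Q_{\bar\theta_i}(s, a) - \alpha \log \pi_\phi(a \mid s) with a \sim \pi_\phi. In the code below, QLoss covers J_Q, PolicyLoss covers J_\pi, AlphaLoss covers J(\alpha), and QTarget maintains the target values.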

Using the components provided with Emote, we can write this as

device = torch.device("cpu")
env = DictGymWrapper(AsyncVectorEnv(10 * [HitTheMiddle]))  # ten parallel toy envs
table = DictObsTable(spaces=env.dict_space, maxlen=1000, device=device)
memory_proxy = TableMemoryProxy(table)
dataloader = MemoryLoader(table, 100, 2, "batch_size")

q1 = QNet(2, 1)
q2 = QNet(2, 1)
policy = Policy(2, 1)
ln_alpha = torch.tensor(1.0, requires_grad=True)  # learnable log of the entropy temperature
agent_proxy = FeatureAgentProxy(policy, device)

logged_cbs = [
    QLoss(name="q1", q=q1, opt=Adam(q1.parameters(), lr=8e-3)),
    QLoss(name="q2", q=q2, opt=Adam(q2.parameters(), lr=8e-3)),
    PolicyLoss(pi=policy, ln_alpha=ln_alpha, q=q1, opt=Adam(policy.parameters())),
    AlphaLoss(pi=policy, ln_alpha=ln_alpha, opt=Adam([ln_alpha]), n_actions=1),
    QTarget(pi=policy, ln_alpha=ln_alpha, q1=q1, q2=q2),
]

callbacks = logged_cbs + [
    SimpleGymCollector(env, agent_proxy, memory_proxy, warmup_steps=500),
    FinalLossTestCheck([logged_cbs[2]], [10.0], 2000),  # logged_cbs[2] is the PolicyLoss
]

trainer = Trainer(callbacks, dataloader)
trainer.train()

Here, each callback in the callbacks list is its own reusable class that can readily be used in other, similar algorithms. The callback classes themselves are straightforward to write. As an example, here is the PolicyLoss callback, which implements the policy update from the algorithm above.

class PolicyLoss(LossCallback):
    def __init__(
        self,
        *,
        pi: nn.Module,
        ln_alpha: torch.Tensor,
        q: nn.Module,
        opt: optim.Optimizer,
        max_grad_norm: float = 10.0,
        name: str = "policy",
        data_group: str = "default",
    ):
        super().__init__(
            name=name,
            optimizer=opt,
            network=pi,
            max_grad_norm=max_grad_norm,
            data_group=data_group,
        )
        self.policy = pi
        self._ln_alpha = ln_alpha
        self.q = q

    def loss(self, observation):
        # Sample an action (and its log-probability) from the current policy,
        # using the reparameterization trick so gradients flow through the sample.
        p_sample, logp_pi = self.policy(**observation)
        q_pi = self.q(p_sample, **observation)
        alpha = torch.exp(self._ln_alpha).detach()
        # J_pi = E[ alpha * log pi(a|s) - Q(s, a) ], averaged over the batch.
        policy_loss = alpha * logp_pi - q_pi
        policy_loss = torch.mean(policy_loss)
        assert policy_loss.dim() == 0
        return policy_loss

Installation

For installation and environment handling we use pdm; install it by following the instructions in the pdm documentation. After pdm is set up, set up and activate the emote environment by running

pdm install

or for a full developer installation with all the extra dependencies:

pdm install -d -G :all
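After installing, a quick way to sanity-check the environment is to run an import through pdm (assuming the package's import name is emote):

pdm run python -c "import emote"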

Common problems

Torch won't install: Check that your Python version is correct. Try deleting your .venv directory and recreating it with

pdm venv create 3.8
pdm install -G :all
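You can then confirm that the recreated environment picked up the intended interpreter:

pdm run python --version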

Box2d complains: Box2d needs swig and the Python development headers. On apt-based systems, try

sudo apt install swig
sudo apt install python3.8-dev

Python 3.8 is tricky to install: For Ubuntu-based distros, try adding the deadsnakes PPA.
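With the PPA added, the interpreter and its headers install in the usual way:

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.8 python3.8-dev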

Contribution


We welcome community contributions to this project.

Please read our Contributor Guide for more information on how to get started. Please also read our Contributor Terms before you make any contributions.

Any contribution intentionally submitted for inclusion in an Embark Studios project shall comply with the Rust standard licensing model (MIT OR Apache 2.0) and therefore be dual licensed as described below, without any additional terms or conditions:

License

This contribution is dual licensed under EITHER OF

  • Apache License, Version 2.0
  • MIT license

at your option.

For clarity, "your" refers to Embark or any other licensee/user of the contribution.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

emote-rl-23.0.0.tar.gz (56.5 kB)


Built Distribution

emote_rl-23.0.0-py3-none-any.whl (60.6 kB)


File details

Details for the file emote-rl-23.0.0.tar.gz.

File metadata

  • Download URL: emote-rl-23.0.0.tar.gz
  • Size: 56.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: pdm/2.3.4 CPython/3.10.6

File hashes

Hashes for emote-rl-23.0.0.tar.gz

  • SHA256: 1ceb94c19fc97747322c5f656c77321a67d4468c1c205cbd0730ec8649351090
  • MD5: 163782068e408771ab6fe8644efd7c84
  • BLAKE2b-256: bc03a0257926d9435e9a2bf567183f4cc263bca1e96fce1b407bbe7a5ff981bf

See more details on using hashes here.
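As a concrete example, here is one way to verify the SHA256 digest of the source distribution after downloading it (a minimal sketch; it assumes the archive sits in the current directory):

import hashlib

expected = "1ceb94c19fc97747322c5f656c77321a67d4468c1c205cbd0730ec8649351090"

# Hash the downloaded archive and compare against the digest published above.
with open("emote-rl-23.0.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "hash mismatch: the download may be corrupt"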

File details

Details for the file emote_rl-23.0.0-py3-none-any.whl.

File metadata

  • Download URL: emote_rl-23.0.0-py3-none-any.whl
  • Size: 60.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: pdm/2.3.4 CPython/3.10.6

File hashes

Hashes for emote_rl-23.0.0-py3-none-any.whl

  • SHA256: b9f58e77bb4eb11d1712895aeda6de0c71f6f8df9c38ad6175cfa9596236f4a3
  • MD5: 93b85ccb87112a78f2c3c81691f27d83
  • BLAKE2b-256: e71b3f9692baa6210673227c623f23fb4e8a3dc732db78dec36d9f9518a45801

See more details on using hashes here.
