Interoperate among reinforcement learning libraries with JAX, PyTorch, Gym and dm_env
Helx: Interoperating between Reinforcement Learning Experimental Protocols
Helx provides a single interface to:

(a) interoperate between a variety of Reinforcement Learning (RL) environments, and
(b) interact with them through a unified agent interface.
It is designed to be agnostic to both the environment library (e.g., `gym`, `dm_control`) and the agent library (e.g., `pytorch`, `jax`, `tensorflow`).
Why use `helx`? It lets you easily switch between different RL libraries, and easily test your agents on different environments.
Installation
```sh
pip install git+https://github.com/epignatelli/helx
```
If you also want to download the binaries for `mujoco`, for both `gym` and `dm_control`, and for `atari`:

```sh
helx-download-extras
```
And then tell the system where the MuJoCo binaries are:

```sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/mujoco/lib
export MJLIB_PATH=/path/to/home/.mujoco/mujoco210/bin/libmujoco210.so
export MUJOCO_PY_MUJOCO_PATH=/path/to/home/.mujoco/mujoco210
```
Example
A typical use case is to design an agent and toy-test it on `catch` before evaluating it on more complex environments, such as Atari, Procgen, or MuJoCo.
```python
import bsuite
import gym

import helx.environment
import helx.experiment
import helx.agents

# create the environment in your favourite way
env = bsuite.load_from_id("catch/0")
# convert it to a helx environment
env = helx.environment.to_helx(env)

# create the agent
hparams = helx.agents.Hparams(env.obs_space(), env.action_space())
agent = helx.agents.Random(hparams)

# run the experiment
helx.experiment.run(env, agent, episodes=100)
```
Switching to a different environment is as simple as changing the `env` variable.
```diff
import bsuite
import gym

import helx.environment
import helx.experiment
import helx.agents

# create the environment in your favourite way
-env = bsuite.load_from_id("catch/0")
+env = gym.make("procgen:procgen-coinrun-v0")
# convert it to a helx environment
env = helx.environment.to_helx(env)

# create the agent
hparams = helx.agents.Hparams(env.obs_space(), env.action_space())
agent = helx.agents.Random(hparams)

# run the experiment
helx.experiment.run(env, agent, episodes=100)
```
Supported libraries
We currently support these external environment models:
On the road:
Adding a new agent (`helx.agents.Agent`)

The `helx` agent interface is designed as the minimal set of functions necessary to (i) interact with an environment and (ii) perform reinforcement learning.
```python
class Agent(ABC):
    """A minimal RL agent interface."""

    @abstractmethod
    def sample_action(self, timestep: Timestep) -> Array:
        """Applies the agent's policy to the current timestep to sample an action."""

    @abstractmethod
    def update(self, timestep: Timestep) -> Any:
        """Updates the agent's internal state (knowledge), such as a table,
        or some function parameters, e.g., the parameters of a neural network."""
```
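A concrete agent just fills in these two methods. The sketch below shows a uniform-random agent against this interface; note that `Timestep` and `Array` here are simplified stand-ins defined locally, not helx's actual classes, so the snippet runs without helx installed:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any
import random

# Local stand-ins for helx's Timestep and Array types (assumptions, for
# illustration only; the real definitions live in the helx package).
Array = Any


@dataclass
class Timestep:
    observation: Any
    reward: float


class Agent(ABC):
    """A minimal RL agent interface, mirroring the one shown above."""

    @abstractmethod
    def sample_action(self, timestep: Timestep) -> Array:
        """Applies the agent's policy to the current timestep to sample an action."""

    @abstractmethod
    def update(self, timestep: Timestep) -> Any:
        """Updates the agent's internal state (knowledge)."""


class RandomAgent(Agent):
    """Uniform-random policy: the simplest possible implementation."""

    def __init__(self, n_actions: int):
        self.n_actions = n_actions

    def sample_action(self, timestep: Timestep) -> Array:
        # Ignore the observation and pick an action uniformly at random.
        return random.randrange(self.n_actions)

    def update(self, timestep: Timestep) -> Any:
        # A random agent has no knowledge to update.
        return None
```

Any learning agent follows the same shape: its policy goes in `sample_action`, and its learning rule (table update, gradient step, etc.) goes in `update`.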
Adding a new environment library (`helx.environment.Environment`)

Adding a new library requires three steps:
1. Implement the `helx.environment.Environment` interface for the new library. See the `dm_env` implementation for an example.
2. Implement serialisation (to `helx`) of the following objects:
   - `helx.environment.Timestep`
   - `helx.spaces.Discrete`
   - `helx.spaces.Continuous`
3. Add the new library to the `helx.environment.to_helx` function to tell `helx` about the new protocol.
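The heart of step 1 is an adapter that wraps the foreign environment and translates its native step/reset outputs into helx-style timesteps. The following self-contained sketch illustrates the pattern; `ToyEnv`, `TimestepLike`, and `ToyEnvAdapter` are hypothetical names invented for this example, not helx or third-party APIs:

```python
from dataclasses import dataclass
from typing import Any, Optional


# Hypothetical stand-in for helx.environment.Timestep.
@dataclass
class TimestepLike:
    observation: Any
    reward: Optional[float]
    is_terminal: bool


class ToyEnv:
    """A made-up third-party environment with its own (obs, reward, done) protocol."""

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        return float(self.t), 1.0, self.t >= 3


class ToyEnvAdapter:
    """Translates ToyEnv's native return values into helx-style timesteps."""

    def __init__(self, env: ToyEnv):
        self._env = env

    def reset(self) -> TimestepLike:
        obs = self._env.reset()
        # A reset produces no reward and is never terminal.
        return TimestepLike(observation=obs, reward=None, is_terminal=False)

    def step(self, action) -> TimestepLike:
        obs, reward, done = self._env.step(action)
        return TimestepLike(observation=obs, reward=reward, is_terminal=done)
```

Steps 2 and 3 then register such an adapter with the conversion machinery, so that a single `to_helx(env)` call dispatches to the right wrapper for each supported library.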
Cite

If you use `helx`, please consider citing it as:
```bibtex
@misc{helx,
  author = {Pignatelli, Eduardo},
  title = {Helx: Interoperating between Reinforcement Learning Experimental Protocols},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/epignatelli/helx}}
}
```
A note on maintenance

This repository was born as the recipient of personal research code developed over the years. Its maintenance is limited by the time and resources of a research project staffed by a single person. While I would like to automate many actions, I do not have the time to maintain the full body of automation that a well-maintained package deserves. This is the reason for the WIP badge, which I do not plan to remove soon. Maintenance will prioritise code functionality over documentation and automation.
Any help is very welcome. A quick guide to interacting with this repository:
- If you find a bug, please open an issue, and I will fix it as soon as I can.
- If you want to request a new feature, please open an issue, and I will consider it as soon as I can.
- If you want to contribute yourself, please open an issue first: let's discuss the objective, plan a proposal, and open a pull request to act on it.
If you would like to be involved further in the development of this repository, please contact me directly at: edu dot pignatelli at gmail dot com.