Fast reinforcement learning research
The goal of Embodied is to empower researchers to quickly implement new agents at scale. Embodied achieves this by specifying interfaces for both environments and agents, allowing users to mix and match agents, environments, and evaluation protocols. Embodied provides common building blocks that users are encouraged to fork when more control is needed. The only dependency is NumPy, and agents can be implemented in any framework.
```
embodied/
  core/    # Config, logging, checkpointing, simulation, wrappers
  run/     # Evaluation protocols that combine agents and environments
  envs/    # Environment suites such as Gym, Atari, DMC, Crafter
  agents/  # Agent implementations
```
```
class Agent:

  __init__(obs_space, act_space, config)
  policy(obs, state=None, mode='train') -> act, state
  train(data, state=None) -> state, metrics
  report(data) -> metrics
  dataset(generator) -> generator
  init_policy(batch_size) -> state
  init_train(batch_size) -> state
```
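As a concrete illustration, a trivial random agent could implement this interface as follows. This is a sketch, not part of the library: it assumes each entry of `act_space` exposes `shape` and `dtype` attributes, which may differ from the actual space API.

```python
import numpy as np

class RandomAgent:
  """Sketch of an agent that ignores observations and acts randomly.

  Assumes each action space exposes `shape` and `dtype` attributes;
  the real space API may differ.
  """

  def __init__(self, obs_space, act_space, config=None):
    self.obs_space = obs_space
    self.act_space = act_space
    self.config = config

  def init_policy(self, batch_size):
    return None  # This agent keeps no recurrent policy state.

  def init_train(self, batch_size):
    return None  # No training state either.

  def policy(self, obs, state=None, mode='train'):
    # Sample one random action per key of the action space.
    act = {
        k: np.random.uniform(-1, 1, space.shape).astype(space.dtype)
        for k, space in self.act_space.items()}
    return act, state

  def train(self, data, state=None):
    return state, {}  # Nothing to learn, no metrics to report.

  def report(self, data):
    return {}

  def dataset(self, generator):
    return generator  # Pass replay batches through unchanged.
```

Because the agent only touches the interface, any evaluation protocol that drives `policy` and `train` can run it without knowing how it is implemented internally.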
```
class Env:

  __len__() -> int
  @obs_space -> dict of spaces
  @act_space -> dict of spaces
  step(action) -> obs dict
  render() -> array
  close()
```
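To show how the two interfaces fit together, here is a minimal counting environment driven by random actions. This is a sketch under assumptions: the `reset` key in the action dict and the convention of returning `reward` and episode-boundary flags inside the obs dict are hypothetical here and may not match the library exactly.

```python
import numpy as np

class CountEnv:
  """Sketch of a toy environment that counts steps up to a horizon.

  Follows the Env interface above; returning the reward and the
  episode-end flag inside the obs dict is an assumed convention.
  """

  def __init__(self, horizon=5):
    self.horizon = horizon
    self.count = 0

  def __len__(self):
    return 1  # Number of parallel sub-environments.

  @property
  def obs_space(self):
    return {'count': dict(shape=(), dtype=np.int64)}

  @property
  def act_space(self):
    return {'action': dict(shape=(), dtype=np.int64)}

  def step(self, action):
    if action.get('reset', False) or self.count >= self.horizon:
      self.count = 0  # Start a new episode.
    else:
      self.count += 1
    return {
        'count': np.int64(self.count),
        'reward': np.float32(self.count == self.horizon),
        'is_last': self.count >= self.horizon}

  def render(self):
    return np.zeros((64, 64, 3), np.uint8)  # Blank placeholder frame.

  def close(self):
    pass

# Interaction loop with random actions.
env = CountEnv()
obs = env.step({'reset': True})
for _ in range(7):
  act = {'action': np.int64(np.random.randint(2))}
  obs = env.step(act)
env.close()
```

Keeping observations and actions as dicts of arrays is what lets evaluation protocols batch, log, and wrap environments generically.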