A hierarchical reinforcement learning framework that uses a directed graph to define the hierarchy.
Project description
Graph-RL
Graph-RL is a hierarchical reinforcement learning (HRL) framework that emphasizes modularity and flexibility. The user defines a hierarchy by specifying a directed graph whose nodes correspond to what is usually called a level or layer in the literature. Each node consists of an algorithm (responsible for learning), a subtask (responsible for observation, goal, and reward generation), and a policy (which generates the output of the node). This design allows the user to tailor a hierarchy to a reinforcement learning environment and unlocks the modularity inherent in HRL.
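To make this structure concrete, here is a minimal, self-contained sketch of a node with its three components. All class, attribute, and method names below are illustrative stand-ins, not the actual graph_rl API; see the demos for the real interface.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Node:
    """Illustrative node sketch (not the graph_rl API)."""
    name: str
    algorithm: Any        # learning component, e.g. an off-policy RL algorithm
    subtask: Callable     # builds this node's observation, goal, and reward
    policy: Callable      # maps the node's observation and goal to its output
    children: List["Node"] = field(default_factory=list)

    def add_child(self, child: "Node") -> None:
        """Add a directed edge from this node to a child node."""
        self.children.append(child)
```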
When the hierarchical policy is evaluated, the graph is traversed from the root node to one of the leaf nodes. In general, the output of the parent node modulates the policy of the child node. If a node has more than one child, the policy of the parent node chooses which edge to follow. When a leaf node is reached, an atomic action from the action space of the environment is sampled.
The child nodes traversed in this forward pass can then report feedback back to their parent nodes. This backward pass enables hindsight operations during learning (e.g., hindsight action relabeling).
Furthermore, every traversed node (starting from the leaf node that sampled the atomic action) can decide whether to return control to its parent or to stay active (in which case the forward pass in the next environment step starts there). Control can also be reclaimed by any traversed parent node, e.g., if it has achieved its subgoal.
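The forward pass just described can be summarized in a short sketch. This is conceptual pseudocode built on the illustrative Node class above, not the graph_rl implementation; in particular, the subtask and policy signatures and the `edge` attribute are assumptions.

```python
def forward_pass(node, env_obs, parent_output=None):
    """Conceptual sketch of the forward pass (illustrative, not graph_rl code)."""
    # The subtask builds this node's observation and goal from the raw
    # environment observation and the parent's output (assumed signature).
    node_obs, goal = node.subtask(env_obs, parent_output)
    output = node.policy(node_obs, goal)
    if not node.children:
        return output  # leaf reached: the output is an atomic action
    # With more than one child, the parent's policy also selects the edge;
    # here we assume the output carries an illustrative edge index.
    edge = output.edge if len(node.children) > 1 else 0
    return forward_pass(node.children[edge], env_obs, parent_output=output)
```

The backward pass and the stay-active-or-return-control bookkeeping described above are omitted from this sketch.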
Installation
With Python 3.7 or higher, run
pip install graph_rl
Usage
Using Graph-RL requires specifying the hierarchy via a graph. A quick way to get started is to use graph classes that automatically generate the whole graph when provided with a subtask specification for each node. Alternatively, a graph can be constructed manually by instantiating nodes and defining parent-child relations; a rough sketch of both routes follows. Please refer to the scripts in the demos folder for further information.
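The sketch below illustrates the two construction routes using the illustrative Node class from above. Names and signatures are assumptions, not the real graph_rl classes; the demos folder shows the actual API.

```python
# Manual construction: instantiate nodes and wire up a parent-child relation
# (placeholder components stand in for real algorithms, subtasks, policies).
low = Node("low_level", algorithm=None,
           subtask=lambda obs, g: (obs, g), policy=lambda obs, goal: 0)
high = Node("high_level", algorithm=None,
            subtask=lambda obs, g: (obs, g), policy=lambda obs, goal: goal)
high.add_child(low)  # directed edge: high -> low

# Automatic construction: a graph class builds the whole hierarchy from one
# subtask specification per node (hypothetical helper, shown as a comment):
# graph = GraphClass(subtask_specs=[high_spec, low_spec])
```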
At the moment, the algorithms "Hierarchical Actor-Critic" (HAC) and "Hierarchical Reinforcement Learning with Timed Subgoals" (HiTS) are available.
Project details
Download files
Download the file for your platform.
Source Distribution: Graph_RL-0.1.2.tar.gz
Built Distribution: Graph_RL-0.1.2-py3-none-any.whl
File details
Details for the file Graph_RL-0.1.2.tar.gz.
File metadata
- Download URL: Graph_RL-0.1.2.tar.gz
- Upload date:
- Size: 42.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c27150e0c7f822b29a971a4d4bc327b86db06b9dda54f2b04f944a38d6e929f2 |
| MD5 | 32606d4d6304875bac3a9f229b6a38c5 |
| BLAKE2b-256 | 36532fb96854ffeded9e2b6425bd54e140fa74e235122749a227a0b85d896a04 |
File details
Details for the file Graph_RL-0.1.2-py3-none-any.whl.
File metadata
- Download URL: Graph_RL-0.1.2-py3-none-any.whl
- Upload date:
- Size: 68.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a31c627cb8a3a5e8937e115a0a5fa06549f5ffac4c470e09186d299cabaa8571 |
| MD5 | da023c26bf39eeed40f1a10df7694aa2 |
| BLAKE2b-256 | f6856692f5714ed61639775ecf821874a74d0b75cae4044c4a79a67dcfa98390 |