
Build Goal-driven Models of the Sensorimotor Cortex with Ease.

Project description



Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling


AngoraPy is an open-source modeling library for goal-oriented research in neuroscience. It provides a simple interface for training deep neural network models of the human brain on various customizable sensorimotor tasks using reinforcement learning. It thereby empowers goal-driven modeling to go beyond the sensory domain and enter that of sensori*motor* control, closing the perception-action loop.

AngoraPy is designed to require no deep understanding of reinforcement learning. It employs state-of-the-art machine learning techniques, optimized for distributed computation that scales from local workstations to high-performance computing clusters. We aim to hide as much of this as possible under the hood of an intuitive, high-level API, while preserving the option to customize most aspects of the pipeline.

This library is developed as part of the Human Brain Project at CCN Maastricht. It is an effort to build software by neuroscientists, for neuroscientists. If you have suggestions, requests or questions, feel free to open an issue.

[Animation: manipulation demo]

:sparkles: Features

Supported Task Settings

  • Discrete Action Spaces (Categorical, MultiCategorical)
  • Continuous Action Spaces (Beta, Gaussian)
  • Discrete State Spaces
  • Continuous State Spaces

Supported Model Types

  • Recurrent Networks
  • Convolutional Networks
  • Recurrent+Convolutional Networks

Supported Model Training

  • Local Distributed Training
  • HPC Distributed Training

Training Backend

  • Proximal Policy Optimization
  • Asymmetric Policy/Value Networks
  • Truncated Backpropagation Through Time

Entrypoints & Deployment

  • PyPI Package
  • Docker files
  • Source code

📥 Installation

AngoraPy is available on PyPI. First, install requirements:

sudo apt install libopenmpi-dev
pip install --extra-index-url https://pypi.nvidia.com tensorrt-bindings==8.6.1 tensorrt-libs==8.6.1

Then install AngoraPy via pip:

pip install angorapy
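
To verify the installation, you can try importing the package (printing the version here assumes angorapy exposes a __version__ attribute, as most packages do):

python -c "import angorapy; print(angorapy.__version__)"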

From source

Alternatively, you can download this repository, or the source code of any previous release or branch, and install it from source using pip.

pip install -e .

This way, if you make changes to the source code, these will be recognized in the installation (without the need to reinstall).
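
For example, to clone and install the latest source from GitHub:

git clone https://github.com/ccnmaastricht/angorapy.git
cd angorapy
pip install -e .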

Docker

Alternatively, you can install AngoraPy and all its dependencies in a Docker container using the Dockerfile provided in this repository (/docker/Dockerfile). To this end, download the repository and build the Docker image from within the /docker directory:

sudo docker build -t angorapy:master https://github.com/ccnmaastricht/angorapy.git#master -f - < Dockerfile

To install a different version, replace #master in the build context URL with the tag or branch of the version you want to install.
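
You can then start a container from the built image; assuming the image ships with a shell (not confirmed by this README), something like:

sudo docker run -it angorapy:master bash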

🚀 Getting Started

The scripts train.py, evaluate.py and observe.py are ready-made entry points for training, evaluating and observing an agent in any environment. With pretrain.py, it is possible to pretrain the visual component. benchmark.py provides functionality for training a batch of agents, possibly with different configurations, to compare strategies.

Training an Agent

The train.py command-line interface provides a convenient entry point for running all sorts of experiments using the built-in models and environments in angorapy. You can train an agent on any environment with optional hyperparameters. Additionally, a monitor will automatically be linked to the training of the agent. For more detail, consult the README on monitoring.

Base usage of train.py is as follows:

python train.py ENV --architecture MODEL

For instance, training LunarLanderContinuous-v2 using the deeper architecture is possible by running:

python train.py LunarLanderContinuous-v2 --architecture deeper

For more advanced options, like custom hyperparameters, consult

python train.py -h

Evaluating and Observing an Agent

There are two more entry points for evaluating and observing an agent: evaluate.py and observe.py. General usage is as follows:

python evaluate.py ID

where ID is the agent's ID assigned when it is created (train.py prints this out; in custom scripts, get it with agent.agent_id).
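
observe.py presumably follows the same pattern (an assumption based on the shared description above; check its help output for the exact interface):

python observe.py ID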

Writing a Training Script

To train agents with custom models, environments, etc., you can write your own script. The following is a minimal example:

from angorapy import make_task
from angorapy.models import get_model_builder
from angorapy.agent.ppo_agent import PPOAgent

# create the task (environment) and a builder for a simple feed-forward model
env = make_task("LunarLanderContinuous-v2")
model_builder = get_model_builder("simple", "ffn")

# build the agent and train it; the arguments set (in order) the number of
# gather-optimize cycles, the epochs per cycle, and the batch size
agent = PPOAgent(model_builder, env)
agent.drill(100, 10, 512)
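
As a variation, the same pipeline should accept the other built-in model types; the recurrent option below is an assumption based on the supported model types listed above, so check get_model_builder's documentation for the exact accepted values:

from angorapy import make_task
from angorapy.models import get_model_builder
from angorapy.agent.ppo_agent import PPOAgent

env = make_task("LunarLanderContinuous-v2")

# assumption: a recurrent model type can be requested through the same builder
model_builder = get_model_builder("simple", "lstm")

agent = PPOAgent(model_builder, env)
agent.drill(100, 10, 512)

# the agent's ID is what evaluate.py and observe.py expect
print(agent.agent_id)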

For more details, consult the examples.

🎓 Documentation

Detailed documentation of AngoraPy is provided in the READMEs of most subpackages. Additionally, we provide examples and tutorials that get you started with writing your own scripts using AngoraPy. For further reading on specific modules, consult the README of the respective subpackage.

If you are missing documentation for a specific part of AngoraPy, feel free to open an issue and we will do our best to add it.

🔀 Distributed Computation

Data collection in PPO is embarrassingly parallel, allowing multiple workers to generate experience independently. We support parallel gathering and optimization through MPI. Agents automatically distribute their workers evenly over the available CPU cores, while optimization is distributed over all available GPUs. If no GPUs are available, all CPUs share the optimization workload.

Distribution is possible locally on your workstation and on HPC sites.

💻 Local Distributed Computing with MPI

To use MPI locally, you need a working MPI implementation, e.g. Open MPI 4 on Ubuntu. To execute train.py via MPI, run

mpirun -np 12 --use-hwthread-cpus python3 train.py ...

where, in this example, 12 is the number of locally available CPU threads, and --use-hwthread-cpus makes MPI schedule over hardware threads (as opposed to only physical cores). Usage of train.py is as described previously.
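
For example, to use all hardware threads the machine reports (nproc is standard on Linux):

mpirun -np $(nproc) --use-hwthread-cpus python3 train.py ...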

:cloud: Distributed Training on SLURM-based HPC clusters

Please note that the following is optimized for and tested on the specific cluster we use, but it should extend to at least any SLURM-based setup.

On any SLURM-based HPC cluster, you may submit your job with sbatch using the following script template:

#!/bin/bash -l
#SBATCH --job-name="angorapy"
#SBATCH --account=xxx
#SBATCH --time=24:00:00
#SBATCH --nodes=32
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=1
#SBATCH --partition=normal
#SBATCH --constraint=gpu&startx
#SBATCH --hint=nomultithread

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export CRAY_CUDA_MPS=1

# load virtual environment
source ${HOME}/robovenv/bin/activate

export DISPLAY=:0
srun python3 -u train.py ...

The number of parallel workers will equal the number of nodes times the number of CPUs per node (32 x 12 = 384 in the template above).

🔗 Citing AngoraPy

If you use AngoraPy for your research, please cite us as follows:

Weidler, T., & Senden, M. (2020). AngoraPy: Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling [Computer software].

Or, using BibTeX:

@software{angorapy2020,
  author = {Weidler, Tonio and Senden, Mario},
  month = {3},
  title = {{AngoraPy: Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling}},
  year = {2020}
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

angorapy-0.10.8.tar.gz (1.8 MB)

Uploaded Source

Built Distribution

angorapy-0.10.8-py3-none-any.whl (2.0 MB)

Uploaded Python 3

File details

Details for the file angorapy-0.10.8.tar.gz.

File metadata

  • Download URL: angorapy-0.10.8.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.4.2 requests/2.22.0 setuptools/45.2.0 requests-toolbelt/0.8.0 tqdm/4.59.0 CPython/3.8.10

File hashes

Hashes for angorapy-0.10.8.tar.gz:

  • SHA256: 588e8e981ae6580d6cc09e5e854a13a37c3a1ee786d888c65a7b7f522ce1e356
  • MD5: d530c97ef3dbf35ada61e3c7a7f85a86
  • BLAKE2b-256: 79f94e0949c16e607dd43421b698d803123f8c5f3aaf99ac6fe07439f111351e


File details

Details for the file angorapy-0.10.8-py3-none-any.whl.

File metadata

  • Download URL: angorapy-0.10.8-py3-none-any.whl
  • Upload date:
  • Size: 2.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.4.2 requests/2.22.0 setuptools/45.2.0 requests-toolbelt/0.8.0 tqdm/4.59.0 CPython/3.8.10

File hashes

Hashes for angorapy-0.10.8-py3-none-any.whl:

  • SHA256: e1db537ccf27df58acfc4b460b45d968a30426829db6f7df1eaedfe4f6249daf
  • MD5: 84d20da0dfabde87075f16bcf80d9597
  • BLAKE2b-256: 22920a0f10793cbdfee4167f12e56be49a28eb9e8570804d2de17d10a2aa0676

