
NeuralPlayground: The standardised environment for the hippocampus and entorhinal cortex models.


1. Introduction

The abstract representation of space has been extensively studied in the hippocampus and entorhinal cortex, in part because task behavior and neural activity in these circuits are relatively easy to monitor and record. A growing variety of theoretical models have been proposed to capture the rich neural and behavioral phenomena associated with these circuits. However, objectively comparing these theories against each other, and against empirical data, is difficult.

Although the importance of a virtuous interaction between experiment and theory is widely recognized, the tools available to facilitate this comparison are limited. Some important challenges to standardised comparison are:

  1. Lack of availability and accessibility of data in a standardised, labeled format.

  2. Lack of standard or easy ways for models to interact with the task.

  3. Lack of standard or easy ways to compare model output with empirical data.

To address this gap, we present NeuralPlayground, an open-source standardised software framework to enable adjudication between models of the hippocampus and entorhinal cortex. This Python package offers a reproducible way to compare models against a centralised library of published experimental results, including neural recordings and animal behavior. The framework currently contains implementations of three Agents, as well as three Experiments providing simple interfaces to publicly available neural and behavioral data. It also contains a customizable 2-dimensional Arena (continuous and discrete) able to reproduce common experimental environments in which the agents can move and interact. We note that each module can also be used separately, allowing flexible access to influential models and data sets.

We currently rely on visual comparison of a hand-selected set of model outputs with neural recordings, as shown in github.com/NeuralPlayground/examples/comparison. In the future, quantitative and qualitative measures will be added for systematic comparisons across any Agent, Arena, and Experiment. We want to stress that this will not constitute an objective judgment of how well an Agent replicates brain mechanisms. Instead, it only allows an objective and complete comparison with the current evidence in the field, as is typically done in publications.

Altogether, we hope our framework, available at github.com/NeuralPlayground, offers a foundation that the community will build upon, working toward a shared, standardized, open, and reproducible computational understanding of the hippocampus and entorhinal cortex.

Try our short tutorial online in Colab.

2. Installation

Create a conda environment

We advise you to install the package in a virtual environment, to avoid conflicts with other packages. For example, using conda:

conda create --name NPG-env python=3.10
conda activate NPG-env

Pip install

You can use pip to install the latest release of NeuralPlayground from PyPI.

# install the latest release
pip install NeuralPlayground

# upgrade to the latest release
pip install -U NeuralPlayground

# install a particular release
pip install NeuralPlayground==0.0.1
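To check that the installation worked, you can import the package from Python. This sketch assumes the import name is the lowercase neuralplayground, which differs from the PyPI distribution name:

# Quick sanity check after installation.
# Assumption: the import name is the lowercase "neuralplayground",
# even though the PyPI distribution is called "NeuralPlayground".
import neuralplayground
print("NeuralPlayground imported from:", neuralplayground.__file__)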

Install for development

If you want to contribute to the project, get the latest development version from GitHub, and install it in editable mode, including the "dev" dependencies:

git clone https://github.com/SainsburyWellcomeCentre/NeuralPlayground/
cd NeuralPlayground
pip install -e '.[dev]'

3. Project

Try our package! We are gathering feedback to focus our efforts on improving the code and adding new features, so if you tell us what you would like to have, we might just implement it ;) Please refer to the Roadmap to understand the state of the project and get an idea of the direction it is going in. This open-source software was built to be collaborative and lasting. We hope that the framework will be simple enough to be adopted by a great number of neuroscientists, eventually guiding the path towards a computational understanding of hippocampal-entorhinal cortex (HEC) mechanisms. We follow reproducible, inclusive, and collaborative project design guidelines. All relevant documents can be found in Documents.

How to run a single module

Each module can be used separately to explore and analyze experimental data and to better understand any implemented model. Additionally, different Arenas can be initialised with artificial architectures or with data from real-life experiments. We provide examples of module instantiation in the detailed Jupyter notebooks found in Examples_experiment, Examples_arena and Examples_agents.
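As a minimal sketch, an Arena can be instantiated and inspected on its own. The class name and keyword arguments below are assumptions for illustration; check the Examples_arena notebook for the implementations shipped with your version of the package.

# Minimal sketch of using a single module (an Arena) in isolation.
# The class name "Simple2D" and its arguments are assumptions made
# for illustration; see Examples_arena for the real interfaces.
from neuralplayground.arenas import Simple2D

arena = Simple2D(
    arena_x_limits=[-50, 50],  # assumed arena width limits
    arena_y_limits=[-50, 50],  # assumed arena height limits
)
obs, state = arena.reset()  # reset() is assumed to return an observation and a state
print("initial observation:", obs)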

How to run interactions between modules

As shown in the Jupyter notebooks in Examples_agent, the Agent can interact with an Arena in a standard RL framework. The first step is to initialise an Agent and Arena of your choice. The Agent can be thought of as the animal performing the Experiment, and the Arena as the experimental setting where the animal navigates and performs a task.
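The sketch below illustrates this interaction loop in the usual RL style. The agent and arena class names and the exact act/step/update signatures are assumptions; the Examples_agent notebooks show the actual interfaces.

# Sketch of an Agent-Arena interaction loop (standard RL structure).
# Class names and method signatures here are assumptions made for
# illustration; consult the Examples_agent notebooks for the real API.
from neuralplayground.agents import Stachenfeld2018  # assumed agent class
from neuralplayground.arenas import Simple2D         # assumed arena class

arena = Simple2D(arena_x_limits=[-50, 50], arena_y_limits=[-50, 50])
agent = Stachenfeld2018()

obs, state = arena.reset()
for _ in range(1000):
    action = agent.act(obs)                  # the "animal" chooses its next move
    obs, state, reward = arena.step(action)  # the "experimental setting" responds
    agent.update()                           # the agent learns from the transition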

How to run comparisons

As shown in the Jupyter notebook Examples_comparison, we provide visual comparisons between results from agents run on experimental behavior and results from the real experiment.
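Since comparisons are currently visual rather than driven by a dedicated API, the sketch below only illustrates the idea of placing a model-derived rate map next to an experimentally recorded one. The random arrays are placeholders standing in for outputs of an Agent and an Experiment.

# Hypothetical sketch of a side-by-side visual comparison of rate maps.
# The random arrays are placeholders; in practice they would come from
# an Agent and an Experiment (see the Examples_comparison notebooks).
import matplotlib.pyplot as plt
import numpy as np

model_rate_map = np.random.rand(30, 30)         # placeholder for a model rate map
experimental_rate_map = np.random.rand(30, 30)  # placeholder for recorded data

fig, (ax_model, ax_data) = plt.subplots(1, 2, figsize=(8, 4))
ax_model.imshow(model_rate_map)
ax_model.set_title("Model")
ax_data.imshow(experimental_rate_map)
ax_data.set_title("Experiment")
plt.show()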

Check our Tolman-Eichenbaum Machine Implementation in this branch (work in progress).

4. I want to contribute

There are many ways to contribute to the NeuralPlayground repository:

  1. Implement a hippocampal and entorhinal cortex Agent of your choice.

  2. Work on improving the Arena.

  3. Add an Experimental data set.

All contributions should be submitted through a pull request that we will later assess. Before sending a pull request, make sure you have:

  1. Checked the Licensing frameworks.

  2. Followed the PEP8 and numpy docstring style conventions (more details can be found in the Style Guide; a short docstring example is given after this list).

  3. Implemented and run the Tests.

  4. Commented your work.
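For reference, a minimal numpy-style docstring for an illustrative, hypothetical helper function looks like this:

import numpy as np


def place_field_rate(position, centre, width):
    """Gaussian place-field firing rate (illustrative example only).

    Parameters
    ----------
    position : float
        Current position of the agent along one dimension.
    centre : float
        Centre of the place field.
    width : float
        Width (standard deviation) of the place field.

    Returns
    -------
    float
        Firing rate between 0 and 1.
    """
    return float(np.exp(-((position - centre) ** 2) / (2 * width ** 2)))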

All contributions to the repository are acknowledged through the all-contributors bot. Refer to the README.md files found in each of the modules for further details on how to contribute to them.

5. Cite

See Citation for the correct citation of this framework.

6. License

More details about the license can be found at Licence.

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  • Clementine Domine: 🎨 🧑‍🏫 💻 🔣

  • rodrigcd: 🎨 🧑‍🏫 💻 🔣

  • Luke Hollingsworth: 📖 💻

  • Andrew Saxe: 🧑‍🏫

  • DrCaswellBarry: 🧑‍🏫

  • Niko Sirmpilatze: 🚇 🚧

  • Adam Tyson: 🚧 🚇

This project follows the all-contributors specification. Contributions of any kind are welcome!
