
NeuralPlayground

A standardised environment for the hippocampus and entorhinal cortex models.


Introduction

The abstract representation of space has been extensively studied in the hippocampus and entorhinal cortex. A growing variety of theoretical models have been proposed to capture the rich neural and behavioral phenomena associated with these circuits. However, objective comparison of these theories against each other and against empirical data is challenging.

Although the significance of virtuous interaction between experiments and theory is widely recognized, the tools available to facilitate comparison are limited. Some important challenges we aim to solve are:

  1. The lack of availability and accessibility of data in a standardized, labeled format.

  2. The absence of standard, easy ways to compare model output with empirical data.

  3. The lack of a central repository of models and data sets relevant to the study of the hippocampus and entorhinal cortex.

To address this gap, we present NeuralPlayground, an open-source, standardised software framework for comparing models of the hippocampus and entorhinal cortex. This Python package offers a reproducible way to compare models against a centralised library of published experimental results, including neural recordings and animal behavior. The framework currently contains implementations of three Agents, as well as three Experiments providing simple interfaces to publicly available neural and behavioral data. It also contains a customizable two-dimensional Arena (continuous and discrete) able to reproduce common experimental environments in which the agents can move and with which they can interact. Each module can also be used separately, allowing flexible access to influential models and data sets.

We currently rely on visual comparison of a hand-selected set of model outputs with neural recordings, as shown in github.com/NeuralPlayground/examples/comparison. In the future, a set of quantitative and qualitative measures will be added for systematic comparisons across Agents, Arenas, and Experiments. We want to stress that this won't constitute a definitive judgment on the ability of an Agent to replicate brain mechanisms. Instead, it allows an objective and thorough comparison against the current evidence in the field, as is typically done in publications, which can be used to guide model design and development.
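As one concrete illustration of the kind of quantitative measure we have in mind, a model-generated firing-rate map could be scored against a recorded one with a simple spatial correlation. The sketch below uses synthetic placeholder maps, and Pearson correlation is only one candidate metric, not an API of the package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a model-generated and an experimentally recorded
# firing-rate map over a 20x20 grid of spatial bins (purely illustrative data)
model_rate_map = rng.random((20, 20))
recorded_rate_map = model_rate_map + 0.1 * rng.random((20, 20))  # noisy copy

# Pearson correlation of the flattened maps: one candidate similarity score
similarity = np.corrcoef(model_rate_map.ravel(), recorded_rate_map.ravel())[0, 1]
print(f"rate-map correlation: {similarity:.3f}")
```

A battery of such scores, computed per Agent and per Experiment, is what would make the planned systematic comparisons possible.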

Altogether, we hope our framework offers a foundation that the community will build upon, working toward a shared, standardized, open, and reproducible computational understanding of the hippocampus and entorhinal cortex.

Try our short tutorial online in Colab.

Installation

Create a conda environment

We advise you to install the package in a virtual environment, to avoid conflicts with other packages. For example, using conda:

conda create --name NPG-env python=3.10
conda activate NPG-env
conda install pip

Pip install

You can use pip to get the latest release of NeuralPlayground from PyPI.

# install the latest release
pip install NeuralPlayground

# upgrade to the latest release
pip install -U NeuralPlayground

# install a particular release
pip install NeuralPlayground==0.0.5

Note

If you wish to run our implementation of the Tolman-Eichenbaum machine, there are additional dependencies to install. These can be found in the TEM_README.md file.

Install for development

If you want to contribute to the project, get the latest development version from GitHub, and install it in editable mode, including the "dev" dependencies:

git clone https://github.com/SainsburyWellcomeCentre/NeuralPlayground/ --single-branch
cd NeuralPlayground
pip install -e .[dev]

Note

If you are using the zsh shell (default on macOS), replace the last command with:

pip install -e '.[dev]'

Usage

Try our package! We are gathering opinions to focus our efforts on improving aspects of the code or adding new features, so if you tell us what you would like to have, we might just implement it 😊. This open-source software was built to be collaborative and lasting. We hope that the framework will be simple enough to be adopted by a great number of neuroscientists, eventually guiding the path to the computational understanding of the HEC mechanisms. We follow reproducible, inclusive, and collaborative project design guidelines. All relevant documents can be found in Documents.

Agent Arena interaction

You can pick an Agent and an Arena of your choice to run a simulation. Agents and Arenas have a simple interface for interacting with each other, as in OpenAI Gymnasium.

# import an agent based on a plasticity model of grid cells
from neuralplayground.agents import Weber2018
# import a square 2D arena
from neuralplayground.arenas import Simple2D  

# Initialise the agent
agent = Weber2018()

# Initialise the arena
arena = Simple2D()

To make the agent interact with the arena, a very simple loop looks like the following:

iterations = 1000
obs, state = arena.reset()
for j in range(iterations):
    # Observe to choose an action
    action = agent.act(obs)
    # Run environment for given action
    obs, state, reward = arena.step(action)
    # Update agent parameters
    update_output = agent.update()

This process is the base of our package. We provide a more detailed example in the Colab tutorial. Specific examples of how to use each module can be found in the agent, arena, and experiment Jupyter notebooks.
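The same act/step/update contract applies whichever Agent and Arena you combine. Below is a self-contained sketch of that interaction protocol using minimal stand-in classes; these stubs are hypothetical illustrations, not part of the package:

```python
import random

class RandomAgent:
    """Stand-in agent: picks a random 2D displacement each step."""
    def act(self, obs):
        return (random.uniform(-1, 1), random.uniform(-1, 1))

    def update(self):
        # A real agent would update its internal model (e.g. weights) here
        return None

class MinimalArena:
    """Stand-in arena: a 10x10 open-field box that clips positions to its walls."""
    def __init__(self, size=10.0):
        self.size = size
        self.pos = (size / 2, size / 2)

    def reset(self):
        self.pos = (self.size / 2, self.size / 2)
        return self.pos, {"t": 0}

    def step(self, action):
        # Clip the new position to stay inside the box
        x = min(max(self.pos[0] + action[0], 0.0), self.size)
        y = min(max(self.pos[1] + action[1], 0.0), self.size)
        self.pos = (x, y)
        reward = 0.0  # open-field foraging: no task reward
        return self.pos, {"t": 1}, reward

agent = RandomAgent()
arena = MinimalArena()
obs, state = arena.reset()
for _ in range(1000):
    action = agent.act(obs)
    obs, state, reward = arena.step(action)
    agent.update()
```

Any class implementing `act` and `update` can be dropped in as the agent, which is what lets each module of the package be swapped or used on its own.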

Note

Check our Tolman-Eichenbaum Machine implementation in this branch (work in progress); you will also need to install PyTorch to run it.

Simulation Manager

We provide some backend tools to run simulations and compare the results with experimental data, including methods to keep track of your runs and a comparison board to visualise the results. You can check the details in the Simulation Manager and Comparison Board Jupyter notebooks. In addition, we have some default simulations you can try out, for which you don't need to write much code, since they are implemented using a SingleSim class. For example:

# Import default simulation, which is a SingleSim 
from neuralplayground.backend.default_simulation import stachenfeld_in_2d
from neuralplayground.backend.default_simulation import weber_in_2d
stachenfeld_in_2d.run_sim(save_path="my_results")

This class allows you to run a simulation with a single line of code, and it will automatically save the results in a folder with the name you provide, keeping track of any errors and logs. You can also use our SimulationManager to run multiple simulations at once, save the results, and keep track of each run and possible errors for easy debugging, among other functions.

# Import Simulation Manager
from neuralplayground.backend import SimulationManager

# Initialise simulation manager
my_sims = [weber_in_2d, stachenfeld_in_2d]
my_manager = SimulationManager(simulation_list = my_sims,
                               runs_per_sim = 5,  # Run 5 instances per simulation
                               manager_id = "example_simulation",
                               verbose = True)
my_manager.generate_sim_paths()
my_manager.run_all()
my_manager.check_run_status()

To compare the results, use the comparison board described in the Comparison Board Jupyter notebook. Over time, we will build an interface for easy model comparison and visualisation of the results!


I want to contribute

There are many ways to contribute to our project.

  1. Implement a hippocampal and entorhinal cortex Agent of your choice.

  2. Work on improving the Arena.

  3. Add an Experimental data set.

  4. Implement metrics to compare the output of an Agent with experimental data.

  5. Refactor the code to improve the readability and efficiency.

All contributions should be submitted through a pull request and will be reviewed by the maintainers. Before sending a pull request, make sure you have done the following:

  1. Checked the Licensing frameworks.

  2. Used the right development environment. Make sure to initialise the pre-commit hooks with pre-commit install and run pre-commit run -a to format the code and check for errors.

  3. Followed the PEP8 and numpy docstring style convention. More details found in Style Guide.

  4. Implemented and ran tests.

  5. Commented your work.

All contributions to the repository are acknowledged through the all-contributors bot. Refer to the README.md files found in each of the modules for further details on how to contribute to them.

Cite

See Citation for the correct citation of this framework.

License

⚖️ MIT

Contributors

Thanks go to these wonderful people (emoji key):

Clementine Domine: 🎨 🧑‍🏫 💻 🔣
rodrigcd: 🎨 🧑‍🏫 💻 🔣
Luke Hollingsworth: 📖 💻
Andrew Saxe: 🧑‍🏫
DrCaswellBarry: 🧑‍🏫
Niko Sirmpilatze: 🚇 🚧 🔧
Adam Tyson: 🚧 🚇
rhayman: 💻
Devon Jarvis: 📖 💻

This project follows the all-contributors specification. Contributions of any kind are welcome!
