
NeuroGym: Gymnasium-style Cognitive Neuroscience Tasks


NeuroGym is a curated collection of neuroscience tasks with a common interface. The goal is to facilitate the training of neural network models on neuroscience tasks.


NeuroGym inherits from the machine learning toolkit Gymnasium, a maintained fork of OpenAI’s Gym library. It allows a wide range of well-established machine learning algorithms to be easily trained on behavioral paradigms relevant to the neuroscience community. NeuroGym also incorporates several properties and functions (e.g., continuous-time and trial-based tasks) that are important for neuroscience applications. The toolkit also includes various modifier functions that allow easy configuration of new tasks.
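All tasks therefore expose Gymnasium's familiar reset/step interface, with trial boundaries reported through the info dictionary. A minimal self-contained sketch of that interaction loop (toy class for illustration, not a real NeuroGym task):

```python
# Schematic of the Gymnasium-style interface shared by NeuroGym tasks
# (toy environment; real tasks are created via gymnasium.make).

class ToyTrialEnv:
    """Emits fixed-length trials and flags trial ends in `info`."""

    def __init__(self, trial_len=3):
        self.trial_len = trial_len
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0, {}  # observation, info

    def step(self, action):
        self.t += 1
        new_trial = self.t >= self.trial_len
        if new_trial:
            self.t = 0  # a new trial starts on the next step
        obs, reward, terminated, truncated = 0.0, 0.0, False, False
        return obs, reward, terminated, truncated, {'new_trial': new_trial}

env = ToyTrialEnv()
obs, info = env.reset()
flags = [env.step(0)[4]['new_trial'] for _ in range(3)]
print(flags)  # [False, False, True]
```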

Please see our extended project documentation for additional details.


Installation

1. Create a Virtual Environment

Create and activate a virtual environment to install the current package, e.g. using conda (please refer to their site for questions about creating the environment):

conda activate # ensures you are in the base environment
conda create -n neurogym python=3.11 -y
conda activate neurogym

2. Install NeuroGym

Install the latest stable release of neurogym using pip:

pip install neurogym

2.1 Reinforcement Learning Support

NeuroGym includes optional reinforcement learning (RL) features via Stable-Baselines3. To install these, choose one of the two options below depending on your hardware setup:

pip install neurogym[rl]

NOTE for Linux/WSL users: If you do not have access to a CUDA-capable NVIDIA GPU (which is the case for most users), the line above will install up to 1.5 GB of unnecessary GPU libraries. To avoid this overhead, we recommend first installing the CPU-only version of PyTorch:

pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install neurogym[rl]

2.2 Editable/Development Mode

To contribute to NeuroGym or run it from source with live code updates:

git clone https://github.com/neurogym/neurogym.git
cd neurogym
pip install -e .

This installs the package in editable mode, so changes in source files are reflected without reinstalling.

To include both RL and development tools (e.g., for testing, linting, documentation):

pip install -e .[rl,dev]

3. Psychopy Installation (Optional)

NOTE: Psychopy installation is currently not working.

If you need Psychopy for your project, additionally run:

pip install psychopy

Tasks

Currently implemented tasks can be found here.

Wrappers

Wrappers (see their docs) are short scripts that allow introducing modifications to the original tasks. For instance, the Random Dots Motion task can be transformed into a reaction-time task by passing it through the reaction_time wrapper. Alternatively, the combine wrapper allows training an agent on two different tasks simultaneously.
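The pattern behind all wrappers is the same: a wrapper holds a reference to the wrapped environment, delegates to it, and overrides selected behavior. A minimal self-contained sketch (toy classes for illustration, not NeuroGym's actual API):

```python
# Schematic of the wrapper pattern (toy example, not the real API):
# the wrapper delegates to the wrapped environment and modifies
# selected behavior -- here, the reward.

class ToyEnv:
    """A stand-in environment that rewards action 1."""

    def step(self, action):
        return 1.0 if action == 1 else 0.0

class DoubleRewardWrapper:
    """Wraps an environment and doubles its rewards."""

    def __init__(self, env):
        self.env = env

    def step(self, action):
        return 2.0 * self.env.step(action)

env = DoubleRewardWrapper(ToyEnv())
print(env.step(1))  # 2.0
```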

Configuration

🧪 Beta Feature — The configuration system is optional and currently under development. You can still instantiate environments, agents, and wrappers with direct parameters. It is only used in a small portion of the codebase and is not required for typical usage. See the demo.ipynb notebook for the only current example of this system in action.

NeuroGym includes a flexible configuration mechanism using Pydantic Settings, allowing configuration via TOML files, Python objects, or plain dictionaries.

Using a TOML file can be especially useful for sharing experiment configurations in a portable way (e.g., sending config.toml to a colleague), reliably saving and loading experiment setups, and easily switching between multiple configurations for the same environment by changing just one line of code. While the system isn't at that stage yet, these are intended future capabilities.

1. From a TOML File

Create a config.toml file (see template) and load it:

from neurogym import Config
config = Config('path/to/config.toml')
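The file might look like the following (hypothetical values; the keys mirror the dictionary example in section 3 below):

```toml
# Hypothetical config.toml for illustration.
local_dir = "./outputs"

[env]
name = "GoNogo-v0"

[monitor]
name = "MyMonitor"

[monitor.plot]
trigger = "step"
value = 500
create = true
```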

You can then pass this config to any component that supports it:

import gymnasium as gym
import neurogym  # importing neurogym registers its tasks with gymnasium
from neurogym.wrappers import monitor

env = gym.make('GoNogo-v0')
env = monitor.Monitor(env, config=config)

Or directly pass the path:

env = monitor.Monitor(env, config='path/to/config.toml')

2. With a Python Class

from neurogym import Config
config = Config(
    local_dir="logs/",
    env={"name": "GoNogo-v0"},
    monitor={"name": "MyMonitor"}
)

3. With a Dictionary

from neurogym import Config
config_dict = {
    "env": {"name": "GoNogo-v0"},
    "monitor": {
        "name": "MyMonitor",
        "plot": {"trigger": "step", "value": 500, "create": True}
    },
    "local_dir": "./outputs"
}
config = Config.model_validate(config_dict)

Examples

NeuroGym is compatible with most packages that use gymnasium. This example Jupyter notebook shows how to train a neural network with RL algorithms using the Stable-Baselines3 toolbox.

Vanilla RNN Support in RecurrentPPO

We extended the RecurrentPPO implementation from stable-baselines3-contrib to support vanilla RNNs (torch.nn.RNN) in addition to LSTMs. This is particularly useful for neuroscience applications, where simpler recurrent architectures can be more biologically interpretable.

You can enable vanilla RNNs by setting recurrent_layer_type="rnn" in the policy_kwargs:

from sb3_contrib import RecurrentPPO

policy_kwargs = {"recurrent_layer_type": "rnn"}  # "lstm" is the default
model = RecurrentPPO("MlpLstmPolicy", env_vec, policy_kwargs=policy_kwargs, verbose=1)
model.learn(5000)

Note: This feature is part of an open pull request to the upstream repository and is currently under review by the maintainers. Until the pull request is merged, you can use this functionality by installing the NeuroGym organization's fork of the repository. To do so, uninstall the original package and install from the custom branch:

pip uninstall stable-baselines3-contrib -y
pip install git+https://github.com/neurogym/stable-baselines3-contrib.git@rnn_policy_addition

This will install the version with vanilla RNN support from the rnn_policy_addition branch in our fork.

Custom Tasks

Creating new custom tasks should be easy. You can contribute tasks using the regular gymnasium format. If your task has a trial/period structure, this template provides the basic structure that we recommend a task to have:

import numpy as np
from gymnasium import spaces
import neurogym as ngym

class YourTask(ngym.PeriodEnv):
    metadata = {}

    def __init__(self, dt=100, timing=None, extra_input_param=None):
        super().__init__(dt=dt)
        # Define the observation and action spaces of your task, e.g.:
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def new_trial(self, **kwargs):
        """Called when a trial ends, to generate the next trial.

        Here you have to set:
            the trial periods: fixation, stimulus, ...
        Optionally, you can set:
            the ground truth: the correct answer for the created trial.
        """

    def _step(self, action):
        """Receive an action and return:
            a new observation, obs
            the reward associated with the action, reward
            a boolean indicating whether the experiment has terminated, terminated
                See https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/#termination
            a boolean indicating whether the experiment has been truncated, truncated
                See https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/#truncation
            a dictionary with extra information:
                the ground-truth correct response, info['gt']
                a boolean indicating the end of the trial, info['new_trial']
        """
        # Compute obs, reward, terminated, truncated, new_trial and gt here.
        return obs, reward, terminated, truncated, {'new_trial': new_trial, 'gt': gt}

Acknowledgements

For the list of authors of the package, please refer to the package's Zenodo DOI.
