
MultiscaleRun is a Python package for running brain-cell simulations at different scales. It orchestrates the coupling between several brain simulators, such as NEURON and STEPS, as well as solvers such as AstroVascPy for cerebral blood flow. The package also embeds a Julia solver to simulate astrocyte activity.


MultiscaleRun

MultiscaleRun is an orchestrator of simulators. Currently, Neurodamus (NEURON), which simulates neuronal activity, is coupled with a metabolism solver in a dual run; more integrations are planned for the future.

Testing for Development

Prerequisites

Setup

Run the setup script at least once before running simulations.

With Spack (requires OBI spack installation):

source setup.sh

The script does:

  • set various env variables
  • create a spackenv folder with the necessary dependencies
  • create a python virtual env in venv
  • call pip install -e . for development
  • create the test folder tiny_CI_test
  • fill it with the necessary data

If a folder (spackenv, venv) is already present, the script skips that installation step, assuming it is complete. If any of the folders are missing, the script redoes the corresponding setup.

Environment variables are still set on every invocation, since they are always needed.

You can modify them and re-run the setup script; it will not override your changes.
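The skip-if-present behavior can be sketched as a small shell helper (illustrative only; `setup_step` and `demo_env` are hypothetical names, not part of the actual script):

```shell
# Illustrative sketch of the setup script's skip-if-present logic.
# setup_step and demo_env are hypothetical names for demonstration.
setup_step() {
  dir="$1"; shift
  if [ -d "$dir" ]; then
    echo "$dir already present, skipping"
  else
    echo "setting up $dir"
    "$@"              # run the build command for this component
    mkdir -p "$dir"   # mark the step as done
  fi
}

setup_step demo_env true   # first call: performs the setup
setup_step demo_env true   # second call: skips it
```

Removing a folder and re-sourcing the script is therefore enough to force that one step to be redone.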

Without Spack:

macOS:

On macOS we use Homebrew. First install the required packages:

brew install cmake openmpi hdf5-mpi python@3.12 ninja

We also need to link python3:

ln -sf /opt/homebrew/bin/python3.12 /opt/homebrew/bin/python3

Ubuntu (Azure):

sudo apt-get update
sudo apt-get install -y mpich libmpich-dev libhdf5-mpich-dev hdf5-tools flex libfl-dev bison ninja-build libreadline-dev

Amazon Linux 2023 (AWS):

sudo dnf update -y
sudo dnf -y install bison cpp cmake gcc-c++ flex flex-devel git python3.11-devel python3-devel python3-pip readline-devel ninja-build openmpi openmpi-devel

The stock OpenMPI packages are not sufficient here; we use the build provided by the EFA installer instead:

cd /tmp
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
tar xf aws-efa-installer-latest.tar.gz
cd aws-efa-installer
sudo ./efa_installer.sh -y --skip-kmod --mpi=openmpi5
cd -
rm -rf /tmp/aws*

Set Python 3.11 as the default (select option 2 when prompted):

sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo alternatives --config python3
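A quick sanity check of which interpreter `python3` now resolves to (the exact version depends on the machine; 3.11 is expected here):

```shell
# Confirm the default python3 after switching alternatives:
python3 --version
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```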

This distro does not ship a parallel HDF5, so we build it from source:

export PATH=/opt/amazon/openmpi5/bin:$PATH
export LD_LIBRARY_PATH=/opt/amazon/openmpi5/lib64:$LD_LIBRARY_PATH
export CC=$(which mpicc)
export CXX=$(which mpicxx)
export MPICC=$(which mpicc)
cd /tmp
curl -O https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_6/downloads/hdf5-1.14.6.tar.gz
tar xf hdf5-1.14.6.tar.gz
cd hdf5-1.14.6
./configure --enable-parallel --enable-shared --prefix=/opt/circuit_simulation/hdf5/hdf5-1.14.6/install
make -j
sudo make install
cd
rm -rf /tmp/hdf5*
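So that subsequent builds pick up this HDF5, it can help to export the install prefix (a sketch; `HDF5_ROOT` is honoured by CMake's `find_package`, but the exact variables your build needs may differ):

```shell
# Make the freshly built HDF5 visible to later builds (the prefix matches
# the --prefix passed to the configure step above):
export HDF5_ROOT=/opt/circuit_simulation/hdf5/hdf5-1.14.6/install
export PATH="$HDF5_ROOT/bin:$PATH"
export LD_LIBRARY_PATH="$HDF5_ROOT/lib:${LD_LIBRARY_PATH:-}"
```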

The rest of the installation is the same on all platforms (macOS, Ubuntu, Amazon Linux). Finally, run this at least once before running simulations:

source setup_no_spack.sh

The script does:

  • set various env variables
  • create a python virtual env in venv with neuron and neurodamus
  • build libsonatareport
  • build the correct neurodamus-models
  • call pip install -e . for development
  • create the test folder tiny_CI_test
  • fill it with the necessary data

If a folder (libsonatareport, neurodamus-models, venv) is already present, the script skips that installation step, assuming it is complete. If any of the folders are missing, the script redoes the corresponding setup.

Environment variables are still set on every invocation, since they are always needed.

You can modify them and re-run the setup script; it will not override your changes.
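To see which steps the script would skip on the next run, you can check for the per-component folders yourself (illustrative):

```shell
# Report, for each component folder, whether the setup script would skip it:
for d in libsonatareport neurodamus-models venv; do
  if [ -d "$d" ]; then
    echo "$d: present, will be skipped"
  else
    echo "$d: missing, will be (re)built"
  fi
done
```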

Unit Tests

Just run with pytest:

pytest tests/unit

Formatting

Use ruff:

ruff check --fix

Integration Test

Go to tiny_CI_test and run the simulation. It is too slow on a single core; use at least 8 cores. Do not go above 90 ranks for now, as that leaves some ranks without neurons (an edge case that has not been checked).

cd tiny_CI_test
mpirun -np 12 multiscale-run compute
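A small wrapper can clamp the rank count to the suggested 8–90 range based on the available cores (a sketch; the `mpirun` line is the same as above and is left commented out):

```shell
# Clamp the MPI rank count to the recommended range [8, 90]:
CORES=$(getconf _NPROCESSORS_ONLN)
NP=$(( CORES > 90 ? 90 : (CORES < 8 ? 8 : CORES) ))
echo "using $NP ranks"
# mpirun -np "$NP" multiscale-run compute
```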

Note

At the moment this simulation depletes atpi and fails after 300 ms. TODO: fix it.

Postprocessing

After the simulation has completed, you can inspect the results with the postprocessing Jupyter notebook, which is already in the current folder. Just start Jupyter:

jupyter lab

Open postproc.ipynb and run it. By default it presents all the traces for the gids [0, 1, 2]. The notebook should be self-explanatory and can be modified at will.

Docs

Build the documentation locally with:

sphinx-build -W --keep-going docs docs/build/html

Alternatively, check the official documentation at: https://multiscalerun.readthedocs.io/stable/

Azure

To run on Azure, request a VM from Erik. Once you have the credentials:

  1. SSH into the VM.
  2. Install the dependencies by following the Setup section (Linux).
  3. Run the simulation.
  4. Start post-processing on the VM:
    jupyter lab --no-browser --port=8888
    
    In parallel, on your local machine, create an SSH tunnel:
    ssh -L 8888:localhost:8888 <user>@<remote-host>
    
    Then open Jupyter in your local browser at http://localhost:8888

Authors

Polina Shichkova, Alessandro Cattabiani, Christos Kotsalos, and Tristan Carel

Acknowledgment

The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology.

Copyright (c) 2005-2023 Blue Brain Project/EPFL Copyright (c) 2025 Open Brain Institute
