SpikeAgent
An AI-powered assistant for spike sorting and neural data analysis
SpikeAgent is a web-based AI assistant designed to help neuroscience laboratories analyze neural electrophysiology data. It provides an intuitive interface for spike sorting workflows, data curation, and neural data analysis, powered by state-of-the-art language models (OpenAI, Anthropic, and Google's Gemini).
What is SpikeAgent?
SpikeAgent automates and streamlines the spike sorting pipeline, from raw neural recordings to curated spike trains. It leverages AI to assist with:
- Spike sorting: Automated detection and classification of action potentials
- Data curation: AI-assisted quality control and unit validation
- Visual analysis: Vision-language models for analyzing spike sorting outputs
- Workflow guidance: Interactive assistance throughout the analysis pipeline
The tool integrates with SpikeInterface, a unified framework for spike sorting, providing a seamless experience for analyzing neural data from various recording systems.
Quick Start (5 Minutes)
What You Need
- Docker - Make sure Docker Desktop is installed and running (for Docker installation)
- OR Python 3.11+ (for pip installation)
- One API Key - Choose one of these:
- OpenAI API Key - OR -
- Anthropic API Key - OR -
- Google API Key
That's it! You only need one API key to get started.
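If you want to sanity-check your setup before launching, a small helper like the one below can report which provider key is configured. This is an illustration only, not part of SpikeAgent; the precedence order and the `detect_provider` helper are made up for this sketch.

```python
import os

# Map of environment variables SpikeAgent reads to the provider they enable.
PROVIDER_KEYS = {
    "OPENAI_API_KEY": "openai",
    "ANTHROPIC_API_KEY": "anthropic",
    "GOOGLE_API_KEY": "google",
}

def detect_provider(env=None):
    """Return the first provider whose API key is set, or None."""
    env = os.environ if env is None else env
    for var, provider in PROVIDER_KEYS.items():
        if env.get(var):
            return provider
    return None

if __name__ == "__main__":
    print("configured provider:", detect_provider())
```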
Installation Options
SpikeAgent offers three ways to install and run:
Option 1: Pip Installation (Recommended for Development)
# Install from PyPI
pip install spikeagent
# Or install from source
git clone https://github.com/arnaumarin/SpikeAgent.git
cd SpikeAgent
pip install -e .
# Run the application
spikeagent
# or
python -m spikeagent.app.main
Option 2: Docker (Recommended for Production)
SpikeAgent offers two ways to run with Docker:
- CPU Version - Works on any computer, easiest to set up
- GPU Version - For systems with NVIDIA GPUs (needed for some spike sorters like Kilosort4)
Using Pre-built CPU Image (Easiest Method)
Step 1: Create a .env file
Create a file named .env in your working directory with your API keys. You need at least one of these:
# Create the .env file
touch .env
Then add your API keys to the .env file. Here are examples:
Example 1: Using OpenAI (Standard)
OPENAI_API_KEY=sk-your-actual-key-here
Example 2: Using OpenAI (Custom/Institutional Endpoint)
OPENAI_API_KEY=your_institution_key_here
OPENAI_API_BASE=https://your-institution-endpoint.com/v1
Example 3: Using Anthropic
ANTHROPIC_API_KEY=sk-ant-your-actual-key-here
Example 4: Using Google/Gemini
GOOGLE_API_KEY=your-google-api-key-here
You only need ONE of these options - choose the provider you prefer!
Important Notes:
- You only need one API key (OpenAI, Anthropic, or Google) - choose whichever you prefer
- If using a custom or institutional OpenAI endpoint, include both OPENAI_API_KEY and OPENAI_API_BASE
- If using standard OpenAI, you only need OPENAI_API_KEY (no OPENAI_API_BASE needed)
- The .env file should be in the same directory where you run the Docker commands
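The whole step can also be done in one command. This writes a minimal .env for the standard OpenAI provider (the key shown is a placeholder, replace it with your real key):

```shell
# Create a minimal .env for the standard OpenAI provider.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-your-actual-key-here
EOF
```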
Step 2: Run SpikeAgent
Option A: Using the automated script (Easiest):
# Run without volume mounts (if you don't need to access local data)
./run-spikeagent.sh
# Run with volume mounts (to access your data directories)
./run-spikeagent.sh /path/to/your/data /path/to/results
Volume Mounts (Optional but Recommended):
If you need SpikeAgent to access your local data files, you should mount your data directories when running the script:
# Mount a single data directory
./run-spikeagent.sh /path/to/your/raw/data
# Mount multiple directories (e.g., raw data and results folder)
./run-spikeagent.sh /path/to/raw/data /path/to/processed/results
# Mount relative paths (automatically converted to absolute)
./run-spikeagent.sh ./data ./results
Why mount volumes?
- SpikeAgent needs access to your raw electrophysiology data files
- You may want to save processed results to a specific location
- Config files (YAML) and other data should be accessible to the container
What paths should you mount?
- Raw data directory: Where your experimental data files are stored (e.g., .rhd, SpikeGLX files, etc.)
- Results/output directory: Where you want processed data and results saved
- Config directory: If you have YAML configuration files (optional)
The script will:
- Pull the latest image from GitHub
- Mount your specified directories (if provided)
- Start the container
- Wait for the application to be ready
- Open your browser automatically
Option B: Manual Docker commands:
# Pull the latest CPU image
docker pull ghcr.io/arnaumarin/spikeagent-cpu:latest
# Run without volume mounts
docker run --rm -p 8501:8501 --env-file .env ghcr.io/arnaumarin/spikeagent-cpu:latest
# Run with volume mounts (to access your data)
docker run --rm -p 8501:8501 --env-file .env \
-v /path/to/your/data:/path/to/your/data \
-v /path/to/results:/path/to/results \
ghcr.io/arnaumarin/spikeagent-cpu:latest
Step 3: Access the application
Once the container is running, open your browser and go to:
http://localhost:8501
That's it! You're ready to use SpikeAgent.
GPU Version (Build Locally):
The GPU version is not yet available as a pre-built package. You need to build it locally:
# Build the GPU image
docker build -f dockerfiles/Dockerfile.gpu -t spikeagent:gpu .
# Create a .env file with your API keys, then run the container
docker run --rm --gpus all -p 8501:8501 --env-file .env spikeagent:gpu
Adding Volume Mounts After Startup:
If you need to access a data path that wasn't mounted when you started the container:
# Add mounts to existing container (preserves existing mounts)
./run-spikeagent.sh --add /path/to/new/data
# You can add multiple paths at once
./run-spikeagent.sh --add /path/to/data1 /path/to/data2
# Or restart with new mounts only (replaces existing mounts)
./run-spikeagent.sh --restart /path/to/data
The --add option will:
- Stop the current container
- Preserve all existing volume mounts
- Add your new paths
- Restart the container
Note: Docker containers cannot mount new volumes at runtime - a restart is required. The app UI will show you the exact command to run if it detects an unmounted path.
Troubleshooting:
- Port already in use? Make sure port 8501 is free, or stop any existing containers: docker stop spikeagent
- Can't pull image? The image is public, so no authentication is needed. If you have issues, make sure Docker is running.
- ARM64/Apple Silicon (M1/M2/M3 Mac)? If you get a "no matching manifest for linux/arm64" error, the run script will automatically detect this and build the image locally for you. The first build may take 10-20 minutes. Once multi-arch images are available, this will no longer be necessary.
- Path not accessible in app? If you see a warning about a path not being found, use ./restart-spikeagent-with-mounts.sh /path/to/data to add it. The app will show you the exact command to use.
- API connection errors? Double-check that your .env file has the correct API keys and is in the same directory as your Docker command.
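For the "port already in use" case, a quick stdlib-only check (an illustration, not something SpikeAgent ships) can tell you whether anything is already listening on 8501 before you start the container:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when a connection succeeds,
        # i.e. something is already listening on that port.
        return s.connect_ex((host, port)) != 0

if __name__ == "__main__":
    print("port 8501 free:", port_is_free(8501))
```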
Open Source Neural Data
You can test SpikeAgent with open datasets such as Neuropixels 2.0 chronic recordings in mice and AutoSort flexible electrode recordings.
Tutorials
The repository includes Jupyter notebook tutorials to help you get started:
- tutorials/vlm_noise_rejection_tutorial.ipynb: Tutorial on using Vision Language Models (VLMs) for AI-assisted spike curation
  - Classifying units as "Good" or "Bad" based on visual features
  - Using waveforms, autocorrelograms, and spike locations for quality control
  - Applying curation to filter out noise units
- tutorials/vlm_merge_simple_tutorial.ipynb: Tutorial on using VLMs for merge analysis
  - Finding potential merge candidates automatically
  - Analyzing visual features using AI (crosscorrelograms, amplitude plots, PCA clustering)
  - Making merge decisions based on AI analysis
  - Applying merges to consolidate units from the same neuron
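In spirit, the noise-rejection step described above boils down to filtering units by a per-unit label map. The sketch below is purely illustrative; the unit IDs, labels, and the `curate` helper are made up and do not reflect the tutorial's actual API:

```python
# Hypothetical VLM output: one label per unit ID
# ("good" = likely real neuron, "bad" = likely noise).
vlm_labels = {0: "good", 1: "bad", 2: "good", 3: "bad", 4: "good"}

def curate(labels):
    """Keep only the units the VLM classified as good."""
    return sorted(uid for uid, lab in labels.items() if lab == "good")

good_units = curate(vlm_labels)  # [0, 2, 4]
```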
Project Structure
spikeagent/
├── src/spikeagent/ # Main source code package
│ ├── app/ # Application code
│ └── curation/ # Curation and VLM analysis tools
├── dockerfiles/ # Docker configuration files
│ ├── Dockerfile.cpu # CPU Docker image
│ └── Dockerfile.gpu # GPU Docker image
├── docs/ # Documentation
│ └── img/ # Documentation images
├── tutorials/ # Jupyter notebook tutorials
│ ├── vlm_merge_simple_tutorial.ipynb # VLM merge analysis tutorial
│ └── vlm_noise_rejection_tutorial.ipynb # VLM curation tutorial
└── tests/ # Test suite
Documentation
Comprehensive documentation is available in the docs/ directory:
- Installation Guide: Detailed setup and installation instructions
- User Guide: How to use SpikeAgent for spike sorting and curation
- API Reference: Programmatic API documentation
Getting Help
For detailed setup instructions, troubleshooting, and usage information:
- Review the Installation Guide
- Check the User Guide for workflows
- Explore the Jupyter notebook tutorials in tutorials/
- Ensure your .env file contains the required API keys
Citation
If you find SpikeAgent useful for your work, please cite:
SpikeAgent: Lin, Z., Marin-Llobet, A., Baek, J., He, Y., Lee, J., Wang, W., ... & Liu, J. (2025). Spike sorting AI agent. Preprint at bioRxiv: https://doi.org/10.1101/2025.02.11.637754
SpikeInterface: Buccino, A. P., Hurwitz, C. L., Garcia, S., Magland, J., Siegle, J. H., Hurwitz, R., & Hennig, M. H. (2020). SpikeInterface, a unified framework for spike sorting. Elife, 9, e61834.