
CaLab: calcium imaging analysis tools — deconvolution and data preparation

Project description

CaLab Python

Python companion package for the CaLab calcium imaging tools: deconvolution and data preparation.

The calab package runs the same Rust FISTA solver used by the CaLab web apps (compiled to a native Python extension via PyO3), and provides utilities for loading data from common pipelines, interactive parameter tuning in the browser, automated deconvolution via CaDecon, and batch processing from scripts.

Installation

pip install calab

# Optional: CaImAn HDF5 and Minian Zarr loaders
pip install calab[loaders]

# Optional: headless browser for batch CaDecon runs
pip install calab[headless]
playwright install chromium

Note: Pre-built wheels include the compiled Rust solver for Linux, macOS, and Windows. No Rust toolchain is needed for installation.

Quick Start

import numpy as np
import calab

# Load your calcium traces (n_cells x n_timepoints)
traces = np.load("my_traces.npy")

# Interactive tuning: opens CaTune in the browser, returns exported params
params = calab.tune(traces, fs=30.0)

# Batch deconvolution with tuned parameters
activity = calab.run_deconvolution(
    traces, fs=30.0,
    tau_r=params["tau_rise"],
    tau_d=params["tau_decay"],
    lam=params["lambda_"],
)

Loading Data

Direct loaders (CaImAn, Minian)

# CaImAn HDF5 — reads traces and sampling rate directly
traces, meta = calab.load_caiman("caiman_results.hdf5")

# Minian Zarr — reads traces, fs must be provided
traces, meta = calab.load_minian("minian_output/", fs=30.0)

# Both return (ndarray, dict); the array has shape (n_cells, n_timepoints)
print(meta)
# {'source': 'caiman', 'sampling_rate_hz': 30.0, 'num_cells': 256, 'num_timepoints': 9000}

Requires optional dependencies: pip install calab[loaders]

Saving for CaTune

calab.save_for_tuning(traces, fs=30.0, path="my_recording")
# Creates my_recording.npy + my_recording_metadata.json

Interactive Tuning (CaTune Bridge)

calab.tune() starts a local HTTP server, opens CaTune in the browser with your data pre-loaded, and waits for you to export parameters:

params = calab.tune(traces, fs=30.0)
# Browser opens → tune parameters → click Export
# Returns: {'tau_rise': 0.02, 'tau_decay': 0.4, 'lambda_': 0.01, 'fs': 30.0, 'filter_enabled': False}

The bridge serves traces via http://127.0.0.1:<port> and the web app communicates back via the ?bridge= URL parameter.
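To illustrate the idea (this is a sketch, not the package's actual implementation), a minimal local bridge can be built with Python's standard library: serve a payload on an ephemeral 127.0.0.1 port, then hand that address to the web app via a `?bridge=` query parameter. The payload contents and the app URL below are hypothetical.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical payload; in practice the traces would be serialized here.
payload = json.dumps({"fs": 30.0, "num_cells": 2}).encode()

class BridgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the data payload to the web app.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")  # app runs on another origin
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), BridgeHandler)  # port 0: OS picks a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The web app would then be opened with the bridge address in the URL:
app_url = f"https://example.invalid/catune?bridge=http://127.0.0.1:{port}"
print(app_url)
```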

Automated Deconvolution (CaDecon Bridge)

calab.decon() opens CaDecon in the browser, which runs the full deconvolution pipeline (including data-driven kernel estimation) and returns the results to Python.

Interactive mode

Opens CaDecon in your browser where you can configure settings and run manually:

result = calab.decon(traces, fs=30.0)

Autorun mode

Starts the solver automatically after loading — no manual interaction needed:

result = calab.decon(traces, fs=30.0, autorun=True, max_iterations=50)

print(result.activity.shape)    # (n_cells, n_timepoints), float32
print(result.alphas)            # per-cell amplitude scaling factors
print(result.baselines)         # per-cell baseline estimates
print(result.pves)              # per-cell proportion of variance explained
print(result.kernel_slow.shape) # estimated slow kernel waveform
print(result.metadata)          # tau values, convergence info, etc.

Headless mode (batch benchmarking)

Run CaDecon without a visible browser window. Requires pip install calab[headless] and playwright install chromium.

Single run:

result = calab.decon(traces, fs=30.0, headless=True, autorun=True, max_iterations=50)

Batch processing (reuses one browser across datasets):

from calab import HeadlessBrowser

results = []
with HeadlessBrowser() as hb:
    for traces, fs in datasets:  # datasets: an iterable of (traces, fs) pairs
        result = calab.decon(traces, fs, headless=hb, autorun=True, max_iterations=50)
        results.append(result)

CaDecon configuration options

All options are keyword-only and optional — unset values use CaDecon's defaults:

result = calab.decon(
    traces, fs=30.0,
    autorun=True,              # start solver automatically
    max_iterations=50,         # solver iterations (1–200)
    convergence_tol=1e-6,      # convergence threshold
    upsample_target=300,       # target sampling rate for upsampling (Hz)
    hp_filter_enabled=True,    # high-pass filter
    lp_filter_enabled=True,    # low-pass filter
    num_subsets=4,             # number of random subsets
    target_coverage=0.8,       # subset coverage fraction (0–1]
    aspect_ratio=2.0,          # subset aspect ratio
    seed=42,                   # random seed for reproducibility
    timeout=120,               # max seconds to wait for results
)

FISTA Deconvolution

Run FISTA deconvolution directly using the Rust solver (no browser needed). This requires known kernel parameters (tau_rise, tau_decay, lambda):

# Basic: returns non-negative activity array
activity = calab.run_deconvolution(traces, fs=30.0, tau_r=0.02, tau_d=0.4, lam=0.01)

# Full: returns activity, baseline, reconvolution, iterations, converged
result = calab.run_deconvolution_full(traces, fs=30.0, tau_r=0.02, tau_d=0.4, lam=0.01)
print(f"Baseline: {result.baseline}, Converged: {result.converged}")

Note: The deconvolved output represents scaled neural activity, not discrete spikes or firing rates. The signal is scaled by an unknown constant (indicator expression level, optical path, etc.), so absolute values should not be interpreted as spike counts.
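Because of this arbitrary per-cell scaling, comparing activity across cells usually requires normalization first. As one illustrative convention (not a CaLab function), each cell's trace can be rescaled to its own peak:

```python
import numpy as np

def normalize_per_cell(activity: np.ndarray) -> np.ndarray:
    """Scale each cell's activity to [0, 1] by its own peak.

    Illustrative only: peak normalization is one convention among many
    (alternatives include dividing by a noise estimate or by alpha).
    """
    peaks = activity.max(axis=1, keepdims=True)
    peaks[peaks == 0] = 1.0  # avoid division by zero for silent cells
    return activity / peaks

activity = np.array([[0.0, 2.0, 4.0],
                     [0.0, 0.0, 0.0]])
print(normalize_per_cell(activity))
```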

Solver Primitives

Low-level access to the individual stages of the deconvolution pipeline. These are the same Rust functions used internally by CaDecon:

# Single-trace solve with upsampling and filtering
result = calab.solve_trace(trace, tau_rise=0.02, tau_decay=0.4, fs=30.0,
                           upsample_factor=10, hp_enabled=True)
# Returns: SolveTraceResult(s_counts, alpha, baseline, threshold, pve, iterations, converged)

# Estimate a free-form kernel from traces and spike trains
kernel = calab.estimate_kernel(traces_flat, spikes_flat, trace_lengths,
                               alphas, baselines, kernel_length=60)

# Fit a bi-exponential model to the estimated kernel
fit = calab.fit_biexponential(kernel, fs=30.0)
# Returns: BiexpFitResult(tau_rise, tau_decay, beta, residual, tau_rise_fast, tau_decay_fast, beta_fast)

# Compute upsampling factor for a target rate
factor = calab.compute_upsample_factor(fs=30.0, target_fs=300.0)  # → 10

Bandpass Filter

Apply the same FFT bandpass filter used in the CaLab web apps:

filtered = calab.bandpass_filter(trace, tau_rise=0.02, tau_decay=0.4, fs=100.0)

Using CaTune Export JSON

Load parameters from a CaTune export and run deconvolution:

params = calab.load_export_params("catune-params-2025-01-15.json")
# {'tau_rise': 0.02, 'tau_decay': 0.4, 'lambda_': 0.01, 'fs': 30.0, 'filter_enabled': False}

# One-step pipeline: loads params, optionally filters, deconvolves
activity = calab.deconvolve_from_export(traces, "catune-params-2025-01-15.json")

Kernel Math

kernel = calab.build_kernel(tau_rise=0.02, tau_decay=0.4, fs=30.0)
g1, g2, d, r = calab.tau_to_ar2(tau_rise=0.02, tau_decay=0.4, fs=30.0)
L = calab.compute_lipschitz(kernel)

CLI

The calab command-line tool is installed with the package:

# Interactive tuning
calab tune my_traces.npy --fs 30.0

# Batch deconvolution with exported params
calab deconvolve my_traces.npy --params catune-params.json -o activity.npy

# Convert from CaImAn/Minian to CaLab format
calab convert caiman_results.hdf5 --format caiman -o my_recording

# Show file info
calab info my_traces.npy

API Reference

Bridge

Function / Class Description
tune(traces, fs, ...) Open CaTune in browser for interactive tuning
decon(traces, fs, ...) Open CaDecon for automated deconvolution
HeadlessBrowser() Context manager for headless browser sessions
DeconConfig Pydantic model for CaDecon configuration

Compute

Function Description
run_deconvolution(traces, fs, tau_r, tau_d, lam) FISTA deconvolution, returns activity
run_deconvolution_full(traces, fs, tau_r, tau_d, lam) Full result with baseline, reconvolution
solve_trace(trace, tau_rise, tau_decay, fs, ...) Single-trace solve (InDeCa pipeline)
estimate_kernel(traces_flat, spikes_flat, ...) Free-form kernel estimation
fit_biexponential(h_free, fs, ...) Bi-exponential kernel fit
compute_upsample_factor(fs, target_fs) Upsample factor for target rate
build_kernel(tau_rise, tau_decay, fs) Double-exponential calcium kernel
tau_to_ar2(tau_rise, tau_decay, fs) AR(2) coefficients from tau values
compute_lipschitz(kernel) Lipschitz constant for FISTA step size
bandpass_filter(trace, tau_rise, tau_decay, fs) FFT bandpass filter from kernel params
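For intuition about compute_lipschitz: for a convolution operator K, the Lipschitz constant of the FISTA gradient is the squared largest singular value of K. Under a circular-convolution assumption this is the squared peak magnitude of the kernel's DFT. This NumPy sketch illustrates that relationship; the Rust implementation may compute it differently.

```python
import numpy as np

def lipschitz_circular(kernel: np.ndarray, n: int) -> float:
    """Lipschitz constant of x -> K.T @ (K @ x) for circular convolution.

    Equals max |FFT(kernel)|^2, the squared spectral norm of K.
    Sketch under a circular-boundary assumption; compute_lipschitz may differ.
    """
    return float(np.max(np.abs(np.fft.fft(kernel, n)) ** 2))

# A unit impulse kernel leaves signals unchanged, so L should be 1.
print(lipschitz_circular(np.array([1.0, 0.0, 0.0]), n=64))
```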

I/O

Function Description
save_for_tuning(traces, fs, path) Save traces for CaTune browser tool
load_tuning_data(path) Load traces saved by save_for_tuning
load_export_params(path) Load params from CaTune export JSON
deconvolve_from_export(traces, params_path) Full pipeline: load params + deconvolve
load_caiman(path, ...) Load traces from CaImAn HDF5 file
load_minian(path, ...) Load traces from Minian Zarr directory

Project details


Download files

Download the file for your platform.

Source Distribution

calab-0.2.3.tar.gz (289.9 kB): Source

Built Distributions

calab-0.2.3-cp312-cp312-win_amd64.whl (759.5 kB): CPython 3.12, Windows x86-64

calab-0.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (820.6 kB): CPython 3.12, manylinux glibc 2.17+ x86-64

calab-0.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (590.9 kB): CPython 3.12, manylinux glibc 2.17+ ARM64

calab-0.2.3-cp312-cp312-macosx_11_0_arm64.whl (567.3 kB): CPython 3.12, macOS 11.0+ ARM64

calab-0.2.3-cp312-cp312-macosx_10_12_x86_64.whl (741.3 kB): CPython 3.12, macOS 10.12+ x86-64

File details

Details for the file calab-0.2.3.tar.gz.

File metadata

  • Download URL: calab-0.2.3.tar.gz
  • Upload date:
  • Size: 289.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for calab-0.2.3.tar.gz
  • SHA256: f494b8145218378c3c15886299281c74719f4bd0b3b85d62bda6f206feb363ae
  • MD5: 7bcf21f6faec333931ffaa8124dd24d5
  • BLAKE2b-256: b896fae986690ec87fa289d379380f9b0149dc32f09b4ed8891a8d9cf9bf7089

See more details on using hashes here.

Provenance

The following attestation bundles were made for calab-0.2.3.tar.gz:

Publisher: publish-python.yml on miniscope/CaLab

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file calab-0.2.3-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: calab-0.2.3-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 759.5 kB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for calab-0.2.3-cp312-cp312-win_amd64.whl
  • SHA256: 058eb52bc3b29d67c037d39c58bd96b25ccec9b87c684eceff951154af86f1f1
  • MD5: 87864fe707001c45bd41e732df861ad5
  • BLAKE2b-256: 6df615d7f3cf8a3c9b67bf6866e0167d009154a6489b42f9fe98ae34b82f8b32


Provenance

The following attestation bundles were made for calab-0.2.3-cp312-cp312-win_amd64.whl:

Publisher: publish-python.yml on miniscope/CaLab


File details

Details for the file calab-0.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.


File hashes

Hashes for calab-0.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: d690f1909d1cd54ba6cdd1469a5b75531582dc065d4e1659c60a9a6f36afc694
  • MD5: 99f9dd9b2e8404180cb5851700ca62f6
  • BLAKE2b-256: 6ae86020dd749a7ac40e1d8ab43e5377876fb07d1606868bd27504a649a66ffe


Provenance

The following attestation bundles were made for calab-0.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: publish-python.yml on miniscope/CaLab


File details

Details for the file calab-0.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.


File hashes

Hashes for calab-0.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
  • SHA256: 5b75f232b61a7d18e403c7e0713a8851e7b3320e57a8c6e002cfb9a22c29b045
  • MD5: e00088427ed1886ed025a60c19b04450
  • BLAKE2b-256: d95e3f468e256ad4b3a2bb75fb4ddb32f74f468ef475f11f8c226f1fc66e0d4f


Provenance

The following attestation bundles were made for calab-0.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl:

Publisher: publish-python.yml on miniscope/CaLab


File details

Details for the file calab-0.2.3-cp312-cp312-macosx_11_0_arm64.whl.


File hashes

Hashes for calab-0.2.3-cp312-cp312-macosx_11_0_arm64.whl
  • SHA256: fa3088390bfd428e256f8076df6f8f1259f50e8ac9e3dbc9796949a0dad1f6e3
  • MD5: 92cd31989766bd541b25cb49b71a5898
  • BLAKE2b-256: 05e42ec79135d8838607e247279e9708e29f5ce322135d20101544fb6d9c6670


Provenance

The following attestation bundles were made for calab-0.2.3-cp312-cp312-macosx_11_0_arm64.whl:

Publisher: publish-python.yml on miniscope/CaLab


File details

Details for the file calab-0.2.3-cp312-cp312-macosx_10_12_x86_64.whl.


File hashes

Hashes for calab-0.2.3-cp312-cp312-macosx_10_12_x86_64.whl
  • SHA256: b1f9d0449944d46b12068e0a57b25b1ca24f72ded1d5099a7914f6306d7a4006
  • MD5: bdca79924068051a73eea51b40c87040
  • BLAKE2b-256: d3d38fbe910b618457082c84dd951a1b412ff9381dd79e1cc55a092eebe32a1b


Provenance

The following attestation bundles were made for calab-0.2.3-cp312-cp312-macosx_10_12_x86_64.whl:

Publisher: publish-python.yml on miniscope/CaLab

