JAMMA

JAX-Accelerated Mixed Model Association — A modern Python reimplementation of GEMMA for genome-wide association studies (GWAS).

  • GEMMA-compatible: Drop-in replacement with identical CLI flags and output formats
  • Numerical equivalence: Validated against GEMMA on real data (85,000 samples × 91,613 SNPs) — 100% agreement on both significance calls and effect directions
  • Fast: Up to 7x faster than GEMMA on kinship and 4x faster on LMM association
  • Memory-safe: Pre-flight memory checks prevent OOM crashes before allocation
  • Pure Python: JAX + NumPy stack, no C++ compilation required

Installation

pip install jamma

Or with uv:

uv add jamma

Quick Start

# Compute kinship matrix (centered relatedness)
jamma -o output gk -bfile data/my_study -gk 1

# Run LMM association (Wald test)
jamma -o results lmm -bfile data/my_study -k output/output.cXX.txt -lmm 1

Output files match the GEMMA format exactly (see the loading sketch after this list):

  • output.cXX.txt — Kinship matrix
  • results.assoc.txt — Association results (chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald)
  • results.log.txt — Run log
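
For downstream analysis, the association table can be read with standard tools — for example with pandas (a minimal sketch, assuming GEMMA's usual tab-separated layout with a header row):

import pandas as pd

# Association results are tab-separated with a header row (GEMMA layout)
assoc = pd.read_csv("results.assoc.txt", sep="\t")

# Filter to genome-wide significant hits at the conventional 5e-8 threshold
hits = assoc[assoc["p_wald"] < 5e-8]
print(hits[["chr", "rs", "ps", "beta", "se", "p_wald"]])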

Python API

One-call GWAS (recommended)

from jamma import gwas

# Full pipeline: load data → kinship → eigendecomp → LMM → results
result = gwas("data/my_study", kinship_file="data/kinship.cXX.txt")
print(f"Tested {result.n_snps_tested} SNPs in {result.timing['total_s']:.1f}s")

# Compute kinship from scratch and save it
result = gwas("data/my_study", save_kinship=True, output_dir="output")

# With covariates and LRT test
result = gwas("data/my_study", kinship_file="k.txt", covariate_file="covars.txt", lmm_mode=2)

Low-level API

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_streaming
from jamma.lmm.eigen import eigendecompose_kinship

# Load PLINK data
data = load_plink_binary("data/my_study")

# Compute kinship
kinship = compute_centered_kinship(data.genotypes)

# Eigendecompose for LMM
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

# Run association, streaming genotypes from disk in chunks.
# `phenotypes` is a 1-D phenotype vector aligned to the loaded samples
# (PLINK carries it in the sixth column of the .fam file)
results = run_lmm_association_streaming(
    bed_path="data/my_study",
    phenotypes=phenotypes,
    kinship=kinship,
    chunk_size=5000,
)
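
To interoperate with GEMMA tooling, the computed kinship can also be written out by hand — a minimal sketch, assuming .cXX.txt is a plain tab-separated matrix with no header, as GEMMA writes it:

import numpy as np

# Write the kinship matrix in GEMMA's .cXX.txt layout:
# a plain n_samples x n_samples tab-separated matrix, no header
np.savetxt("output/output.cXX.txt", np.asarray(kinship), delimiter="\t")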

Memory Safety

Unlike GEMMA, JAMMA includes pre-flight memory checks that prevent out-of-memory crashes:

from jamma.core.memory import estimate_workflow_memory

# Check memory requirements BEFORE loading data
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
print(f"Peak memory: {estimate.total_gb:.1f}GB")
print(f"Available: {estimate.available_gb:.1f}GB")
print(f"Sufficient: {estimate.sufficient}")

Key features:

  • Pre-flight checks before large allocations (eigendecomposition, genotype loading)
  • RSS memory logging at workflow boundaries
  • Incremental result writing (no memory accumulation)
  • Safe chunk size defaults with hard caps

GEMMA can silently run out of memory and be killed by the OS; JAMMA fails fast with a clear error message.
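
The same estimate can gate a long-running job up front — a minimal sketch using only the API shown above:

from jamma.core.memory import estimate_workflow_memory

# Abort before any allocation if the workflow cannot fit in RAM
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
if not estimate.sufficient:
    raise MemoryError(
        f"Workflow needs ~{estimate.total_gb:.1f} GB "
        f"but only {estimate.available_gb:.1f} GB is available"
    )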

Performance

Benchmark on mouse_hs1940 (1,940 samples × 12,226 SNPs):

Operation        GEMMA   JAMMA   Speedup
Kinship (-gk 1)   6.5s    0.9s     7.1x
LMM (-lmm 1)     19.5s    4.7s     4.2x
Total            26.0s    5.6s     4.6x
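
To sanity-check these numbers on your own hardware, both CLIs can be timed with a small harness — a sketch only: it assumes both binaries are on PATH and uses the flags documented above:

import subprocess
import time

def timed(cmd: list[str]) -> float:
    """Run a command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Compare kinship computation wall time for each tool
t_gemma = timed(["gemma", "-bfile", "data/my_study", "-gk", "1", "-o", "k_gemma"])
t_jamma = timed(["jamma", "-o", "k_jamma", "gk", "-bfile", "data/my_study", "-gk", "1"])
print(f"gemma: {t_gemma:.1f}s  jamma: {t_jamma:.1f}s  speedup: {t_gemma / t_jamma:.1f}x")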

Supported Features

Current

  • Kinship matrix computation (-gk 1)
  • Univariate LMM Wald test (-lmm 1)
  • Likelihood ratio test (-lmm 2)
  • Score test (-lmm 3)
  • All tests mode (-lmm 4)
  • Pre-computed kinship input (-k)
  • Covariate support (-c)
  • PLINK binary format (.bed/.bim/.fam)
  • Streaming I/O for 200k+ samples
  • JAX acceleration (CPU/GPU)
  • Pre-flight memory checks (fail-fast before OOM)
  • RSS memory logging at workflow boundaries
  • Incremental result writing

Planned

  • Multivariate LMM (mvLMM)

Documentation

  • Why JAMMA? — Key differentiators from GEMMA
  • User Guide — Installation, usage examples, CLI reference
  • Code Map — Architecture diagrams and source navigation
  • Equivalence Proof — Mathematical proofs and empirical validation against GEMMA
  • GEMMA Divergences — Known differences from GEMMA
  • Performance — Bottleneck analysis, scale validation, configuration guide

Requirements

  • Python 3.11+
  • JAX 0.8.0+
  • NumPy 1.26+

License

GPL-3.0 (same as GEMMA)
