JAMMA

JAMMA (High-performance Multi-method Mixed-Model Association) — a modern Python and C reimplementation of GEMMA for large-scale GWAS.

  • GEMMA-compatible: Drop-in replacement with identical CLI flags and output formats
  • Numerical equivalence: Validated against GEMMA — 100% significance agreement, 100% effect direction agreement
  • Fast: Up to 11x faster than GEMMA 0.98.5 at scale
  • Memory-safe: Pre-flight memory checks prevent OOM crashes before allocation
  • Cross-platform: Runs on Linux, macOS, and Windows — NumPy backend works everywhere, JAX adds batch acceleration on Linux and ARM Mac
  • Optimized for Intel: Best performance on Intel CPUs with MKL BLAS. Runs well on Apple Silicon (Accelerate BLAS). Other architectures (AMD, ARM Linux) work correctly but with less BLAS optimization
  • Pure Python + optional C extensions: NumPy core with an optional JAX stack; C extensions handle DSYEVR eigendecomposition and OpenMP-parallel Wald tests, while JAX accelerates batch MLE optimization
  • Large-scale ready: Optional numpy-mkl ILP64 wheels (numpy 2.4.2) for >46k sample eigendecomposition

Installation

macOS (Intel or ARM)

pip install jamma          # NumPy backend
pip install 'jamma[jax]'   # + JAX acceleration (ARM Mac only)

That's it. macOS Accelerate BLAS handles large matrices natively.

Linux / Windows / Intel Mac

For small datasets (<46k samples), the standard install works:

pip install jamma          # NumPy backend
pip install 'jamma[jax]'   # + JAX acceleration

For large-scale GWAS (>46k samples) on x86_64 (Linux or Intel Mac), install numpy-mkl first: standard numpy is built against 32-bit (LP64) BLAS integers, which overflow at ~46k samples. MKL is x86_64-only, so ARM Mac and Windows users are limited to <46k samples. Pre-built ILP64 wheels are available for Python 3.11–3.14:
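The ~46k figure falls straight out of the 32-bit index: an n × n matrix stops being addressable once n² exceeds 2³¹ − 1. A quick check:

```python
# LP64 BLAS indexes matrix elements with a signed 32-bit integer, so an
# n x n workspace overflows once n * n > 2**31 - 1.
n_max = int((2**31 - 1) ** 0.5)
print(n_max)  # 46340: one sample more and n*n no longer fits in int32
```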

NumPy backend only:

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install jamma --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader

With JAX acceleration:

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install 'jamma[jax]' --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader \
  jax jaxlib jaxtyping

From Git (latest development version):

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install git+https://github.com/michael-denyer/jamma.git --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader

Why --no-deps? JAMMA depends on numpy>=2.0.0, so a normal pip install jamma will pull in standard numpy and overwrite the ILP64 build. --no-deps prevents this; you install the runtime dependencies manually instead.

See the User Guide for ILP64 verification steps.

Platform Support

| Platform | JAX acceleration | BLAS | Notes |
|---|---|---|---|
| Linux x86_64 (Intel) | auto-included | MKL (optimal) | Best performance; ILP64 for >46k samples |
| Linux x86_64 (AMD) | auto-included | OpenBLAS | Works well; MKL also works on AMD but less optimized |
| ARM Mac (M1+) | auto-included | Accelerate | Excellent performance via Apple's BLAS |
| ARM Linux | manual install (NumPy by default) | OpenBLAS | Works correctly; less BLAS optimization |
| Intel Mac | not available (NumPy only) | MKL / Accelerate | JAX dropped Intel Mac; ILP64 for >46k samples |
| Windows | not available (NumPy only) | OpenBLAS | JAX dropped Windows support |

JAMMA's heavy computation (eigendecomposition, matrix multiplication, REML optimization) is BLAS-bound. Intel MKL delivers the best throughput, particularly at scale. Apple Accelerate is a close second on Apple Silicon. OpenBLAS works correctly everywhere but is less tuned for these workloads.
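Since throughput hinges on which BLAS numpy actually loaded, it can be worth checking at runtime. threadpoolctl (already among JAMMA's runtime dependencies) exposes this; a small sketch:

```python
from threadpoolctl import threadpool_info

# List the BLAS/OpenMP libraries loaded in this process and their thread counts.
for pool in threadpool_info():
    print(pool["internal_api"], pool["num_threads"])
```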

JAX is auto-included on Linux and ARM Mac via platform markers. Force a specific backend with --backend numpy or --backend jax.

Quick Start

# Compute kinship matrix (centered relatedness)
jamma -gk 1 -bfile data/my_study -o output
# Output: output/output.cXX.npy (binary, fast)
# Add --legacy-text for GEMMA-compatible text format

# Run LMM association (Wald test)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.npy -o results

# Multiple phenotypes (eigendecomp computed once, reused)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.npy -n "1 2 3" -o results

Output files:

  • output.cXX.npy — Kinship matrix (binary NumPy format; .cXX.txt with --legacy-text)
  • results.assoc.txt — Association results (chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald)
  • results.log.txt — Run log

The reader auto-detects format, so existing .cXX.txt files still work as -k input.
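For reference, the binary kinship file is a plain NumPy array, so it round-trips through np.save/np.load. A toy sketch (illustration only; JAMMA writes the real matrix itself):

```python
import os
import tempfile

import numpy as np

# Toy centered kinship for 5 samples x 20 SNPs, mimicking <prefix>.cXX.npy.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
Xc = X - X.mean(axis=0)                 # center each SNP across samples
K = Xc @ Xc.T / X.shape[1]

path = os.path.join(tempfile.mkdtemp(), "kinship.cXX.npy")
np.save(path, K)                        # binary format (the default)
K2 = np.load(path)                      # what the -k reader accepts
assert np.allclose(K, K2) and K2.shape == (5, 5)
```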

Python API

One-call GWAS (recommended)

The gwas() function is the recommended way to run JAMMA from Python. It handles the full pipeline — data loading, kinship computation, eigendecomposition, and LMM association — in a single call. You don't need to compute a kinship matrix separately unless you want to reuse it across runs.

from jamma import gwas

# Simplest usage: computes kinship internally, no separate kinship step needed
result = gwas("data/my_study")
print(f"Tested {result.n_snps_tested} SNPs in {result.timing['total_s']:.1f}s")

# Or supply a pre-computed kinship matrix to skip recomputation
result = gwas("data/my_study", kinship_file="data/kinship.cXX.npy")

# Compute kinship from scratch and save it for reuse
result = gwas("data/my_study", save_kinship=True, output_dir="output")

# With covariates and LRT test
result = gwas("data/my_study", kinship_file="k.txt", covariate_file="covars.txt", lmm_mode=2)

# LOCO analysis (leave-one-chromosome-out)
result = gwas("data/my_study", loco=True)

# Multi-phenotype with eigendecomp reuse (Python API)
result = gwas("data/my_study", write_eigen=True, phenotype_column=1)
result = gwas("data/my_study", eigenvalue_file="output/result.eigenD.npy",
              eigenvector_file="output/result.eigenU.npy", phenotype_column=2)
# Or use the CLI for automatic multi-phenotype: jamma -lmm 1 ... -n "1 2 3"

# SNP filtering
result = gwas("data/my_study", kinship_file="k.txt", snps_file="snps.txt", hwe=0.001)
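Downstream filtering of results.assoc.txt is plain tab-separated parsing. A sketch over a two-row toy table (column names taken from the Output files list above; the toy values are made up):

```python
import csv
import io

# Toy stand-in for results.assoc.txt (tab-separated, p_wald column as above).
toy = "chr\trs\tps\tp_wald\n1\trs1\t100\t1e-9\n1\trs2\t200\t0.3\n"
rows = list(csv.DictReader(io.StringIO(toy), delimiter="\t"))
hits = [r["rs"] for r in rows if float(r["p_wald"]) < 5e-8]
print(hits)  # ['rs1']
```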

Low-level API (JAX backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_streaming
from jamma.lmm.eigen import eigendecompose_kinship

# Load PLINK data and phenotypes
data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")  # loaded separately from .fam or phenotype file

# Compute kinship and eigendecompose (treat kinship as consumed after this)
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

# Run association (streaming from disk)
results, n_tested = run_lmm_association_streaming(
    bed_path="data/my_study",
    phenotypes=phenotypes,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    chunk_size=5000,
)

Low-level API (NumPy backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_numpy
from jamma.lmm.eigen import eigendecompose_kinship

data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

snp_info = [
    {"chr": str(data.chromosome[i]), "rs": data.sid[i],
     "pos": int(data.bp_position[i]), "a1": data.allele_1[i], "a0": data.allele_2[i]}
    for i in range(data.n_snps)
]

# Returns list[AssocResult] — write to disk via IncrementalAssocWriter
results = run_lmm_association_numpy(
    genotypes=data.genotypes,
    phenotypes=phenotypes,
    kinship=None,  # Not needed when eigenvalues/eigenvectors provided
    snp_info=snp_info,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    lmm_mode=1,
)

Memory Safety

Unlike GEMMA, JAMMA includes pre-flight memory checks that prevent out-of-memory crashes:

from jamma.core.memory import estimate_workflow_memory

# Check memory requirements BEFORE loading data
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
print(f"Peak memory: {estimate.total_gb:.1f}GB")
print(f"Available: {estimate.available_gb:.1f}GB")
print(f"Sufficient: {estimate.sufficient}")
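The estimate is dominated by a handful of dense n × n float64 matrices; a back-of-envelope version of what the pre-flight check guards against:

```python
# Each n x n float64 matrix (kinship, eigenvectors) costs 8 * n**2 bytes.
n = 200_000
gb_per_matrix = 8 * n * n / 1e9
print(f"{gb_per_matrix:.0f} GB per n x n matrix")  # 320 GB
```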

Key features:

  • Pre-flight checks before large allocations (eigendecomposition, genotype loading)
  • RSS memory logging at workflow boundaries
  • Incremental result writing (no memory accumulation)
  • Safe chunk size defaults with hard caps

GEMMA will silently OOM and get killed by the OS. JAMMA fails fast with clear error messages.

Performance

Benchmark on mouse_hs1940 (1,940 samples × 12,226 SNPs), Apple M2 (AC power), GEMMA 0.98.5. Best of multiple runs, end-to-end wall clock:

| Operation | GEMMA 0.98.5 | JAMMA NumPy+C | JAMMA JAX (batch) | JAMMA JAX (streaming) | vs GEMMA |
|---|---|---|---|---|---|
| Kinship (-gk 1) | 2.1 s | 259 ms | 259 ms | — | 8.1x |
| LMM Wald (-lmm 1) | 11.1 s | 1.0 s | 2.0 s | 2.7 s | 11.1x |
| LMM All (-lmm 4) | 20.6 s | 5.1 s | 2.8 s | 4.3 s | 7.3x |

NumPy+C uses a C extension with OpenMP for Wald-only (-lmm 1) — REML optimization is compute-bound and parallelizes well across SNPs. JAX (batch) pulls ahead on all-tests (-lmm 4) because the additional MLE optimization per SNP benefits from jax.vmap batching. JAX (streaming) reads genotypes from disk in chunks and is the production code path for large datasets that don't fit in memory.
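The batching win is the usual vectorization story: the per-SNP statistic becomes one matrix expression over all SNPs instead of a Python loop. A toy numpy analogue of what jax.vmap does to the real per-SNP kernel (the slope statistic here is illustrative, not JAMMA's actual test):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((1_000, 50))   # samples x SNPs (toy genotypes)
y = rng.standard_normal(1_000)         # toy phenotype

# One least-squares slope per SNP, all 50 computed in a single pass.
Gc = G - G.mean(axis=0)
betas = (Gc.T @ y) / np.einsum("ij,ij->j", Gc, Gc)
assert betas.shape == (50,)
```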

Supported Features

Current

  • Kinship matrix computation — centered (-gk 1) and standardized (-gk 2)
  • Univariate LMM Wald test (-lmm 1)
  • Likelihood ratio test (-lmm 2)
  • Score test (-lmm 3)
  • All tests mode (-lmm 4)
  • LOCO kinship — leave-one-chromosome-out analysis (-loco)
  • Binary .npy I/O — default for kinship and eigen files; --legacy-text for GEMMA text format
  • Multi-phenotype support — -n "1 2 3" with single eigendecomposition reuse
  • Eigendecomposition reuse — manual via -d/-u/-eigen, automatic in multi-phenotype mode
  • Phenotype column selection (-n)
  • SNP subset selection for association and kinship (-snps/-ksnps)
  • HWE QC filtering (-hwe)
  • Pre-computed kinship input (-k)
  • Covariate support (-c)
  • PLINK binary format (.bed/.bim/.fam) with input dimension validation
  • Large-scale streaming I/O (>100k samples via numpy-mkl ILP64 — numpy 2.4.2)
  • JAX acceleration (CPU) with automatic device sharding
  • XLA profiling traces (--profile-dir) for TensorBoard/Perfetto
  • Lambda optimization bounds (-lmin/-lmax)
  • Individual weights for kinship (-widv)
  • Categorical covariates with one-hot encoding (-cat)
  • Pre-flight memory checks (fail-fast before OOM)
  • RSS memory logging at workflow boundaries
  • Incremental result writing
  • Optional C extensions: DSYEVR eigendecomposition (O(n) workspace, enables >100k samples) and OpenMP-parallel Wald tests (auto-fallback to pure Python)

Planned

  • Multivariate LMM (mvLMM)

Architecture

JAMMA uses NumPy for data loading and kinship. Eigendecomposition defaults to DSYEVD (via numpy) but falls back to DSYEVR (C extension, O(n) workspace) under memory pressure — critical for >100k samples. At LMM it splits into a JAX backend (JIT, vmap, sharding) or a NumPy backend with an optional C extension for OpenMP-parallel Wald tests.

flowchart TD
    CLI["CLI / gwas()"] --> PIPE["PipelineRunner"]
    PIPE --> LOAD["Load PLINK + Phenotypes<br>(NumPy)"]
    LOAD --> KIN["Kinship<br>(NumPy matmul)"]
    KIN --> EIGMEM{"DSYEVD fits<br>in memory?"}
    EIGMEM -->|yes| EIGD["Eigendecomposition<br>(LAPACK DSYEVD · O(n²) workspace)"]
    EIGMEM -->|no| EIGR["Eigendecomposition<br>(LAPACK DSYEVR · O(n) workspace)"]
    EIGD --> DET{"detect_backend()"}
    EIGR --> DET
    DET -->|"jax"| JAX["JAX Streaming Runner<br>JIT + vmap + sharding"]
    DET -->|"numpy"| NP["NumPy Batch Runner"]
    NP --> CEXT{"C LMM extension<br>available?"}
    CEXT -->|yes| C["C Extension<br>OpenMP + SIMD"]
    CEXT -->|no| PY["Pure Python<br>fallback"]
    JAX --> RES["AssocResult"]
    C --> RES
    PY --> RES

Both backends share the same core algorithms (likelihood.py, prepare_common.py) and produce identical results. Backend-specific files follow a naming convention: *_jax.py / *_numpy.py.
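The backend split in the flowchart amounts to an import probe. A minimal sketch (the real detect_backend() in JAMMA may also weigh platform and CLI flags):

```python
def detect_backend() -> str:
    """Pick 'jax' when JAX is importable, else fall back to 'numpy'."""
    try:
        import jax  # noqa: F401
        return "jax"
    except ImportError:
        return "numpy"

print(detect_backend())
```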

See Code Map for the full architecture diagram with source links.

Documentation

Requirements

  • Python 3.11+
  • NumPy 2.0+
  • JAX 0.5.0+ (auto-included on Linux/ARM Mac; explicit extra on other platforms: pip install 'jamma[jax]')

License

GPL-3.0 (same as GEMMA)
