
High-performance Multi-method Mixed-Model Association for large-scale GWAS


JAMMA

JAMMA (High-performance Multi-method Mixed-Model Association) is a modern Python and C reimplementation of GEMMA for large-scale GWAS.

  • GEMMA-compatible: Drop-in replacement with identical CLI flags and output formats
  • Numerical equivalence: Validated against GEMMA — 100% significance agreement, 100% effect direction agreement
  • Fast: Up to 11x faster than GEMMA 0.98.5 at scale
  • Memory-safe: Pre-flight memory checks prevent OOM crashes before allocation
  • Cross-platform: Runs on Linux, macOS, and Windows — NumPy backend works everywhere, JAX adds batch acceleration on Linux and ARM Mac
  • Optimized for Intel: Best performance on Intel CPUs with MKL BLAS. Runs well on Apple Silicon (Accelerate BLAS). Other architectures (AMD, ARM Linux) work correctly but with less BLAS optimization
  • Pure Python + optional C extensions: NumPy core with an optional JAX stack; the C extensions provide DSYEVR eigendecomposition and OpenMP-parallel Wald tests, and JAX provides batch MLE optimization
  • Large-scale ready: Optional numpy-mkl ILP64 wheels (numpy 2.4.2) for >46k sample eigendecomposition

Installation

macOS (Intel or ARM)

pip install jamma          # NumPy backend
pip install 'jamma[jax]'   # + JAX acceleration (ARM Mac only)

That's it. macOS Accelerate BLAS handles large matrices natively.

Linux / Windows / Intel Mac

For small datasets (<46k samples), the standard install works:

pip install jamma          # NumPy backend
pip install 'jamma[jax]'   # + JAX acceleration

For large-scale GWAS (>46k samples) on x86_64 (Linux or Intel Mac), install numpy-mkl first — standard numpy uses 32-bit BLAS integers which overflow at ~46k samples. MKL is x86_64-only; ARM Mac and Windows users are limited to <46k samples. Pre-built ILP64 wheels are available for Python 3.11–3.14:

NumPy backend only:

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install jamma --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader

With JAX acceleration:

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install 'jamma[jax]' --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader \
  jax jaxlib jaxtyping

From Git (latest development version):

pip install numpy \
  --extra-index-url https://michael-denyer.github.io/numpy-mkl \
  --force-reinstall --upgrade
pip install git+https://github.com/michael-denyer/jamma.git --no-deps
pip install psutil loguru threadpoolctl click progressbar2 bed-reader

Why --no-deps? JAMMA depends on numpy>=2.0.0, so a normal pip install jamma will pull in standard numpy and overwrite the ILP64 build. --no-deps prevents this; you install the runtime dependencies manually instead.

See the User Guide for ILP64 verification steps.
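
One quick way to sanity-check which BLAS the installed numpy links against is numpy's own build-info report. This is a generic numpy check, not a JAMMA command:

import numpy as np

# Print the BLAS/LAPACK libraries this numpy build was compiled against.
# The ILP64 numpy-mkl wheel should report MKL here; see the User Guide
# for the exact ILP64 verification steps.
np.show_config()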

Platform Support

| Platform | pip install jamma | pip install 'jamma[jax]' | BLAS | Notes |
|---|---|---|---|---|
| Linux x86_64 (Intel) | NumPy backend | JAX (auto-included) | MKL (optimal) | Best performance; ILP64 for >46k samples |
| Linux x86_64 (AMD) | NumPy backend | JAX (auto-included) | OpenBLAS | Works well; MKL also works on AMD but is less optimized |
| ARM Mac (M1+) | NumPy backend | JAX (auto-included) | Accelerate | Excellent performance via Apple's BLAS |
| ARM Linux | NumPy only | JAX manual install | OpenBLAS | Works correctly; less BLAS optimization |
| Intel Mac | NumPy only | Not available | MKL / Accelerate | JAX dropped Intel Mac; ILP64 for >46k samples |
| Windows | NumPy only | Not available | OpenBLAS | JAX dropped Windows support |

JAMMA's heavy computation (eigendecomposition, matrix multiplication, REML optimization) is BLAS-bound. Intel MKL delivers the best throughput, particularly at scale. Apple Accelerate is a close second on Apple Silicon. OpenBLAS works correctly everywhere but is less tuned for these workloads.
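
Because throughput depends on which BLAS is actually loaded at runtime, you can confirm it with threadpoolctl (already a JAMMA dependency). A minimal sketch, independent of JAMMA itself:

from threadpoolctl import threadpool_info

# List the BLAS/OpenMP runtimes loaded into this process (e.g. MKL or OpenBLAS),
# along with the thread count each one will use.
for lib in threadpool_info():
    print(lib["internal_api"], lib.get("version"), "threads:", lib.get("num_threads"))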

JAX is auto-included on Linux and ARM Mac via platform markers. Force a specific backend with --backend numpy or --backend jax.

Quick Start

# Compute kinship matrix (centered relatedness)
jamma -gk 1 -bfile data/my_study -o output
# Output: output/output.cXX.npy (binary, fast)
# Add --legacy-text for GEMMA-compatible text format

# Run LMM association (Wald test)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.npy -o results

# Multiple phenotypes (eigendecomp computed once, reused)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.npy -n "1 2 3" -o results

Output files:

  • output.cXX.npy — Kinship matrix (binary NumPy format; .cXX.txt with --legacy-text)
  • results.assoc.txt — Association results (chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald)
  • results.log.txt — Run log

The reader auto-detects format, so existing .cXX.txt files still work as -k input.

Python API

One-call GWAS (recommended)

The gwas() function is the recommended way to run JAMMA from Python. It handles the full pipeline — data loading, kinship computation, eigendecomposition, and LMM association — in a single call. You don't need to compute a kinship matrix separately unless you want to reuse it across runs.

from jamma import gwas

# Simplest usage: computes kinship internally, no separate kinship step needed
result = gwas("data/my_study")
print(f"Tested {result.n_snps_tested} SNPs in {result.timing['total_s']:.1f}s")

# Or supply a pre-computed kinship matrix to skip recomputation
result = gwas("data/my_study", kinship_file="data/kinship.cXX.npy")

# Compute kinship from scratch and save it for reuse
result = gwas("data/my_study", save_kinship=True, output_dir="output")

# With covariates and LRT test
result = gwas("data/my_study", kinship_file="k.txt", covariate_file="covars.txt", lmm_mode=2)

# LOCO analysis (leave-one-chromosome-out)
result = gwas("data/my_study", loco=True)

# Multi-phenotype with eigendecomp reuse (Python API)
result = gwas("data/my_study", write_eigen=True, phenotype_column=1)
result = gwas("data/my_study", eigenvalue_file="output/result.eigenD.npy",
              eigenvector_file="output/result.eigenU.npy", phenotype_column=2)
# Or use the CLI for automatic multi-phenotype: jamma -lmm 1 ... -n "1 2 3"

# SNP filtering
result = gwas("data/my_study", kinship_file="k.txt", snps_file="snps.txt", hwe=0.001)

Low-level API (JAX backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_streaming
from jamma.lmm.eigen import eigendecompose_kinship

# Load PLINK data and phenotypes
data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")  # loaded separately from .fam or phenotype file

# Compute kinship and eigendecompose (treat kinship as consumed after this)
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

# Run association (streaming from disk)
results, n_tested = run_lmm_association_streaming(
    bed_path="data/my_study",
    phenotypes=phenotypes,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    chunk_size=5000,
)

Low-level API (NumPy backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_numpy
from jamma.lmm.eigen import eigendecompose_kinship

data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

snp_info = [
    {"chr": str(data.chromosome[i]), "rs": data.sid[i],
     "pos": int(data.bp_position[i]), "a1": data.allele_1[i], "a0": data.allele_2[i]}
    for i in range(data.n_snps)
]

# Returns LmmRunResult — access .associations for list[AssocResult], .pve for heritability
run_result = run_lmm_association_numpy(
    genotypes=data.genotypes,
    phenotypes=phenotypes,
    kinship=None,  # Not needed when eigenvalues/eigenvectors provided
    snp_info=snp_info,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    lmm_mode=1,
)
results = run_result.associations

Memory Safety

Unlike GEMMA, JAMMA includes pre-flight memory checks that prevent out-of-memory crashes:

from jamma.core.memory import estimate_workflow_memory

# Check memory requirements BEFORE loading data
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
print(f"Peak memory: {estimate.total_gb:.1f}GB")
print(f"Available: {estimate.available_gb:.1f}GB")
print(f"Sufficient: {estimate.sufficient}")

Key features:

  • Pre-flight checks before large allocations (eigendecomposition, genotype loading)
  • RSS memory logging at workflow boundaries
  • Incremental result writing (no memory accumulation)
  • Safe chunk size defaults with hard caps

GEMMA will silently OOM and get killed by the OS. JAMMA fails fast with clear error messages.
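
A usage sketch combining the check with gwas() (the fields are those shown above; the dataset dimensions and path are placeholders):

from jamma import gwas
from jamma.core.memory import estimate_workflow_memory

# Fail fast before loading anything if the workflow won't fit in RAM.
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
if not estimate.sufficient:
    raise SystemExit(
        f"Need ~{estimate.total_gb:.1f}GB peak but only {estimate.available_gb:.1f}GB available"
    )

result = gwas("data/my_study")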

Performance

Benchmark on mouse_hs1940 (1,940 samples × 12,226 SNPs), Apple M2 (AC power), GEMMA 0.98.5. Best of repeated runs, measured as end-to-end wall clock:

| Operation | GEMMA 0.98.5 | JAMMA NumPy | JAMMA NumPy+C | JAMMA JAX (batch) | JAMMA JAX (streaming) | C speedup | vs GEMMA |
|---|---|---|---|---|---|---|---|
| Kinship (-gk 1) | 2.2s | 268ms | 268ms | — | — | 1.0x | 8.3x |
| LMM Wald (-lmm 1) | 11.2s | 4.3s | 1.3s | 2.2s | 2.5s | 3.4x | 8.9x |
| LMM All (-lmm 4) | 20.7s | 8.0s | 5.9s | 3.2s | 4.2s | 1.4x | 6.6x |
| LMM Wald+4cov (-lmm 1 -c) | 41.4s | 12.5s | 5.5s | 4.2s | 5.4s | 2.3x | 9.8x |

NumPy+C uses a C extension with OpenMP for Wald (-lmm 1) — REML optimization is compute-bound and parallelizes well across SNPs. The C speedup grows with covariates (2.3x with 4 covariates) because the Pab table recursion is more expensive. JAX (batch) pulls ahead on all-tests (-lmm 4) because the additional MLE optimization per SNP benefits from jax.vmap batching. JAX (streaming) reads genotypes from disk in chunks and is the production code path for large datasets that don't fit in memory. Kinship is always pure NumPy/BLAS regardless of backend.
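
As a toy illustration of the vmap batching pattern referred to above (this is not JAMMA's likelihood code, just the general idea of mapping a per-SNP function over a batch of SNPs in one fused kernel):

import jax
import jax.numpy as jnp

def per_snp_stat(g, y):
    # Toy per-SNP statistic: squared genotype-phenotype correlation.
    gc = g - g.mean()
    yc = y - y.mean()
    r = (gc @ yc) / jnp.sqrt((gc @ gc) * (yc @ yc))
    return r ** 2

# vmap turns the scalar per-SNP function into one batched computation over all SNPs.
batched_stat = jax.jit(jax.vmap(per_snp_stat, in_axes=(0, None)))

G = jax.random.normal(jax.random.PRNGKey(0), (5000, 1940))  # SNPs x samples
y = jax.random.normal(jax.random.PRNGKey(1), (1940,))
stats = batched_stat(G, y)  # shape (5000,)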

LOCO (Leave-One-Chromosome-Out)

| Backend | LOCO Wald | vs GEMMA |
|---|---|---|
| GEMMA 0.98.5 | 3m44s | 1.0x |
| JAMMA NumPy+C | 9.0s | 24.9x |
| JAMMA JAX | 13.8s | 16.3x |

The large speedup has two sources: (1) JAMMA computes per-chromosome LOCO kinship via streaming and tests only that chromosome's SNPs, while GEMMA -loco tests all SNPs against each LOCO kinship (19× redundant work on 19 chromosomes); (2) JAMMA runs all chromosomes in a single process, avoiding 19 cold-start overheads. On this dataset, NumPy+C is faster than JAX because the JIT compilation overhead per chromosome outweighs XLA's compute benefit at 1,940 samples.
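
As a rough sketch of point (1) using the low-level NumPy API shown earlier (hedged: this is not JAMMA's internal LOCO implementation, it assumes genotypes are laid out samples × SNPs, and in practice gwas(..., loco=True) or the -loco flag does this for you with streaming kinship):

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_numpy
from jamma.lmm.eigen import eigendecompose_kinship

data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")
chroms = np.asarray(data.chromosome)

all_assoc = []
for chrom in np.unique(chroms):
    test = chroms == chrom
    # LOCO kinship: built from every chromosome except the one under test
    # (axis assumption: samples x SNPs).
    k_loco = compute_centered_kinship(data.genotypes[:, ~test])
    eigenvalues, eigenvectors = eigendecompose_kinship(k_loco)
    snp_info = [
        {"chr": str(data.chromosome[i]), "rs": data.sid[i],
         "pos": int(data.bp_position[i]), "a1": data.allele_1[i], "a0": data.allele_2[i]}
        for i in np.flatnonzero(test)
    ]
    # Test only this chromosome's SNPs against its LOCO kinship.
    run_result = run_lmm_association_numpy(
        genotypes=data.genotypes[:, test],
        phenotypes=phenotypes,
        kinship=None,
        snp_info=snp_info,
        eigenvalues=eigenvalues,
        eigenvectors=eigenvectors,
        lmm_mode=1,
    )
    all_assoc.extend(run_result.associations)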

Supported Features

Current

  • Kinship matrix computation — centered (-gk 1) and standardized (-gk 2)
  • Univariate LMM Wald test (-lmm 1)
  • Likelihood ratio test (-lmm 2)
  • Score test (-lmm 3)
  • All tests mode (-lmm 4)
  • LOCO kinship — leave-one-chromosome-out analysis (-loco)
  • Binary .npy I/O — default for kinship and eigen files; --legacy-text for GEMMA text format
  • Multi-phenotype support — -n "1 2 3" with single eigendecomposition reuse
  • Eigendecomposition reuse — manual via -d/-u/-eigen, automatic in multi-phenotype mode
  • Phenotype column selection (-n)
  • SNP subset selection for association and kinship (-snps/-ksnps)
  • HWE QC filtering (-hwe)
  • Pre-computed kinship input (-k)
  • Covariate support (-c)
  • PLINK binary format (.bed/.bim/.fam) with input dimension validation
  • Large-scale streaming I/O (>100k samples via numpy-mkl ILP64 — numpy 2.4.2)
  • JAX acceleration (CPU) with automatic device sharding
  • XLA profiling traces (--profile-dir) for TensorBoard/Perfetto
  • Lambda optimization bounds (-lmin/-lmax)
  • Individual weights for kinship (-widv)
  • Categorical covariates with one-hot encoding (-cat)
  • Pre-flight memory checks (fail-fast before OOM)
  • RSS memory logging at workflow boundaries
  • Incremental result writing
  • Optional C extensions: DSYEVR eigendecomposition (O(n) workspace, enables >100k samples) and OpenMP-parallel Wald tests (auto-fallback to pure Python)
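
For orientation, a hedged CLI example combining several of the flags listed above (file names and thresholds are placeholders):

# Wald test with covariates, HWE filtering, and a SNP subset
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.npy \
      -c covars.txt -hwe 0.001 -snps snps.txt -o results

# Standardized kinship from a SNP subset, with per-individual weights
jamma -gk 2 -bfile data/my_study -ksnps kinship_snps.txt -widv weights.txt -o output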

Planned

  • Multivariate LMM (mvLMM)

Architecture

JAMMA uses NumPy for data loading and kinship. Eigendecomposition defaults to DSYEVD (via numpy) and falls back to DSYEVR (C extension, O(n) workspace) under memory pressure, which is critical for >100k samples. At the LMM stage, execution splits into either a JAX backend (JIT, vmap, sharding) or a NumPy backend with an optional C extension for OpenMP-parallel Wald tests.

flowchart TD
    CLI["CLI / gwas()"] --> PIPE["PipelineRunner"]
    PIPE --> LOAD["Load PLINK + Phenotypes<br>(NumPy)"]
    LOAD --> KIN["Kinship<br>(NumPy matmul)"]
    KIN --> EIGMEM{"DSYEVD fits<br>in memory?"}
    EIGMEM -->|yes| EIGD["Eigendecomposition<br>(LAPACK DSYEVD · O(n²) workspace)"]
    EIGMEM -->|no| EIGR["Eigendecomposition<br>(LAPACK DSYEVR · O(n) workspace)"]
    EIGD --> DET{"detect_backend()"}
    EIGR --> DET
    DET -->|"jax"| JAX["JAX Streaming Runner<br>JIT + vmap + sharding"]
    DET -->|"numpy"| NP["NumPy Batch Runner"]
    NP --> CEXT{"C LMM extension<br>available?"}
    CEXT -->|yes| C["C Extension<br>OpenMP + SIMD"]
    CEXT -->|no| PY["Pure Python<br>fallback"]
    JAX --> RES["AssocResult"]
    C --> RES
    PY --> RES

Both backends share the same core algorithms (likelihood.py, prepare_common.py) and produce identical results. Backend-specific files follow a naming convention: *_jax.py / *_numpy.py.

See Code Map for the full architecture diagram with source links.

Documentation

Requirements

  • Python 3.11+
  • NumPy 2.0+
  • JAX 0.5.0+ (auto-included on Linux/ARM Mac; explicit extra on other platforms: pip install 'jamma[jax]')

License

GPL-3.0 (same as GEMMA)
