JAMMA

Fast Mixed Model Association — A modern Python reimplementation of GEMMA for genome-wide association studies (GWAS).

  • GEMMA-compatible: Drop-in replacement with identical CLI flags and output formats
  • Numerical equivalence: Validated against GEMMA — 100% significance agreement, 100% effect direction agreement
  • Fast: Up to 11x faster than GEMMA on kinship and 6x faster on LMM association
  • Memory-safe: Pre-flight memory checks prevent OOM crashes before allocation
  • Cross-platform: Runs on Linux, macOS, and Windows — NumPy backend works everywhere, JAX backend adds GPU acceleration
  • Pure Python: NumPy + optional JAX stack, no C++ compilation required
  • Large-scale ready: Optional numpy-mkl ILP64 wheels (numpy 2.4.2) for >46k sample eigendecomposition

Installation

# Base install (NumPy backend — works on all platforms)
pip install jamma

# With JAX acceleration (Linux, ARM Mac, Windows CPU)
pip install jamma[jax]

Or with uv:

uv add jamma        # NumPy backend
uv add jamma[jax]   # With JAX acceleration

Platform Support

| Platform | pip install jamma | pip install jamma[jax] | Notes |
|---|---|---|---|
| Linux x86_64 | JAX (auto-included) | JAX (auto-included) | Full support; ILP64 for >46k samples |
| ARM Mac (M1+) | JAX (auto-included) | JAX (auto-included) | Full support |
| Intel Mac | NumPy only | Not available | JAX dropped Intel Mac support |
| Windows | NumPy only | JAX (CPU) | Explicit opt-in via [jax] extra |

JAX is auto-included on Linux and ARM Mac via platform markers. Force a specific backend with --backend numpy or --backend jax.

Quick Start

# Compute kinship matrix (centered relatedness)
jamma -gk 1 -bfile data/my_study -o output

# Run LMM association (Wald test)
jamma -lmm 1 -bfile data/my_study -k output/output.cXX.txt -o results

Output files match GEMMA format exactly:

  • output.cXX.txt — Kinship matrix
  • results.assoc.txt — Association results (chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald)
  • results.log.txt — Run log
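
results.assoc.txt is a plain tab-separated table with the columns listed above, so it loads directly for downstream filtering. A minimal sketch, assuming pandas is installed (it is not a JAMMA dependency):

import pandas as pd

# Load the GEMMA-format association output (tab-separated, one row per SNP)
assoc = pd.read_csv("results.assoc.txt", sep="\t")

# Keep genome-wide significant hits (conventional 5e-8 threshold)
hits = assoc[assoc["p_wald"] < 5e-8]
print(hits[["chr", "rs", "ps", "beta", "se", "p_wald"]])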

Python API

One-call GWAS (recommended)

from jamma import gwas

# Full pipeline: load data → kinship → eigendecomp → LMM → results
result = gwas("data/my_study", kinship_file="data/kinship.cXX.txt")
print(f"Tested {result.n_snps_tested} SNPs in {result.timing['total_s']:.1f}s")

# Compute kinship from scratch and save it
result = gwas("data/my_study", save_kinship=True, output_dir="output")

# With covariates and LRT test
result = gwas("data/my_study", kinship_file="k.txt", covariate_file="covars.txt", lmm_mode=2)

# LOCO analysis (leave-one-chromosome-out)
result = gwas("data/my_study", loco=True)

# Multi-phenotype with eigendecomp reuse
result = gwas("data/my_study", write_eigen=True, phenotype_column=1)
result = gwas("data/my_study", eigenvalue_file="output/result.eigenD.txt",
              eigenvector_file="output/result.eigenU.txt", phenotype_column=2)

# SNP filtering
result = gwas("data/my_study", kinship_file="k.txt", snps_file="snps.txt", hwe=0.001)
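
The eigendecomposition-reuse pattern above extends naturally to a loop over phenotype columns, paying the O(n³) eigendecomposition cost only once. A minimal sketch using the default eigen output paths shown above:

from jamma import gwas

# First run computes and writes the eigendecomposition once
gwas("data/my_study", write_eigen=True, phenotype_column=1)

# Subsequent phenotypes reuse it, skipping the eigendecomposition
for col in (2, 3, 4):
    result = gwas("data/my_study",
                  eigenvalue_file="output/result.eigenD.txt",
                  eigenvector_file="output/result.eigenU.txt",
                  phenotype_column=col)
    print(f"phenotype {col}: {result.n_snps_tested} SNPs tested")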

Low-level API (JAX backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_streaming
from jamma.lmm.eigen import eigendecompose_kinship

# Load PLINK data and phenotypes
data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")  # loaded separately from .fam or phenotype file

# Compute kinship and eigendecompose (treat kinship as consumed after this)
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

# Run association (streaming from disk)
results, n_tested = run_lmm_association_streaming(
    bed_path="data/my_study",
    phenotypes=phenotypes,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    chunk_size=5000,
)
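
results is a list of per-SNP AssocResult records. Assuming AssocResult exposes attributes mirroring the assoc.txt columns (an assumption — check the class definition), significant hits can be pulled out directly:

# Assumption: AssocResult has .rs and .p_wald attributes named after
# the assoc.txt columns; adjust to the actual dataclass fields.
hits = [r for r in results if r.p_wald < 5e-8]
print(f"{len(hits)} of {n_tested} SNPs reach genome-wide significance")
for r in hits[:10]:
    print(r.rs, r.p_wald)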

Low-level API (NumPy backend)

import numpy as np

from jamma.io import load_plink_binary
from jamma.kinship import compute_centered_kinship
from jamma.lmm import run_lmm_association_numpy
from jamma.lmm.eigen import eigendecompose_kinship

data = load_plink_binary("data/my_study")
phenotypes = np.loadtxt("data/my_study.pheno")
kinship = compute_centered_kinship(data.genotypes)
eigenvalues, eigenvectors = eigendecompose_kinship(kinship)

snp_info = [
    {"chr": str(data.chromosome[i]), "rs": data.sid[i],
     "pos": int(data.bp_position[i]), "a1": data.allele_1[i], "a0": data.allele_2[i]}
    for i in range(data.n_snps)
]

# Returns list[AssocResult] — write to disk via IncrementalAssocWriter
results = run_lmm_association_numpy(
    genotypes=data.genotypes,
    phenotypes=phenotypes,
    kinship=None,  # Not needed when eigenvalues/eigenvectors provided
    snp_info=snp_info,
    eigenvalues=eigenvalues,
    eigenvectors=eigenvectors,
    lmm_mode=1,
)
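
In-package, IncrementalAssocWriter handles output; its exact signature isn't shown here, so the stand-in sketch below writes the GEMMA-format columns by hand (AssocResult attribute names are assumed to match the column names):

import csv

COLS = ["chr", "rs", "ps", "n_miss", "allele1", "allele0",
        "af", "beta", "se", "logl_H1", "l_remle", "p_wald"]

with open("results.assoc.txt", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(COLS)
    for r in results:
        # Assumes attribute names match the assoc.txt columns
        writer.writerow([getattr(r, c) for c in COLS])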

Memory Safety

Unlike GEMMA, JAMMA includes pre-flight memory checks that prevent out-of-memory crashes:

from jamma.core.memory import estimate_workflow_memory

# Check memory requirements BEFORE loading data
estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
print(f"Peak memory: {estimate.total_gb:.1f}GB")
print(f"Available: {estimate.available_gb:.1f}GB")
print(f"Sufficient: {estimate.sufficient}")

Key features:

  • Pre-flight checks before large allocations (eigendecomposition, genotype loading)
  • RSS memory logging at workflow boundaries
  • Incremental result writing (no memory accumulation)
  • Safe chunk size defaults with hard caps

When memory runs out, GEMMA is silently killed by the OS with no diagnostic; JAMMA fails fast with a clear error message.
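
User code can apply the same guard before launching a run; a minimal sketch using the estimate fields shown above:

import sys

from jamma import gwas
from jamma.core.memory import estimate_workflow_memory

estimate = estimate_workflow_memory(n_samples=200_000, n_snps=95_000)
if not estimate.sufficient:
    # Fail fast with an actionable message instead of letting the OS kill the process
    sys.exit(f"Need ~{estimate.total_gb:.1f}GB, only {estimate.available_gb:.1f}GB available")

result = gwas("data/my_study")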

Performance

Benchmark on mouse_hs1940 (1,940 samples × 12,226 SNPs), Apple M2:

| Operation | GEMMA | JAMMA | Speedup |
|---|---|---|---|
| Kinship (-gk 1) | 26.5s | 2.4s | 11.0x |
| LMM (-lmm 1) | 27.6s | 4.5s | 6.1x |
| Total | 54.1s | 6.9s | 7.8x |
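
To collect comparable numbers on your own data, wrap the one-call API and compare wall-clock time against the per-stage timings gwas() already records (result.timing, shown in the Python API section):

import time

from jamma import gwas

t0 = time.perf_counter()
result = gwas("data/my_study", save_kinship=True, output_dir="output")
wall = time.perf_counter() - t0

# 'total_s' was shown earlier; other timing keys are implementation-defined
print(f"wall-clock: {wall:.1f}s, pipeline total: {result.timing['total_s']:.1f}s")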

Supported Features

Current

  • Kinship matrix computation — centered (-gk 1) and standardized (-gk 2)
  • Univariate LMM Wald test (-lmm 1)
  • Likelihood ratio test (-lmm 2)
  • Score test (-lmm 3)
  • All tests mode (-lmm 4)
  • LOCO kinship — leave-one-chromosome-out analysis (-loco)
  • Eigendecomposition reuse — multi-phenotype workflows (-d/-u/-eigen)
  • Phenotype column selection (-n)
  • SNP subset selection for association and kinship (-snps/-ksnps)
  • HWE QC filtering (-hwe)
  • Pre-computed kinship input (-k)
  • Covariate support (-c)
  • PLINK binary format (.bed/.bim/.fam) with input dimension validation
  • Large-scale streaming I/O (>100k samples via numpy-mkl ILP64 — numpy 2.4.2)
  • JAX acceleration (CPU/GPU) with automatic CPU device sharding
  • XLA profiling traces (--profile-dir) for TensorBoard/Perfetto
  • Lambda optimization bounds (-lmin/-lmax)
  • Individual weights for kinship (-widv)
  • Categorical covariates with one-hot encoding (-cat)
  • Pre-flight memory checks (fail-fast before OOM)
  • RSS memory logging at workflow boundaries
  • Incremental result writing

Planned

  • Multivariate LMM (mvLMM)

Architecture

JAMMA uses a dual-backend architecture: a JAX backend for GPU/multi-core acceleration and a NumPy backend that works everywhere with zero extra dependencies.

flowchart LR
    CLI["CLI / gwas()"] --> PIPE["PipelineRunner"]
    PIPE --> DET{"detect_backend()"}
    DET -->|"jax"| JAX["JAX Backend<br>JIT + vmap + sharding"]
    DET -->|"numpy"| NP["NumPy Backend<br>pure stdlib"]
    JAX --> RES["AssocResult"]
    NP --> RES

Both backends share the same core algorithms (likelihood.py, prepare_common.py) and produce identical results. Backend-specific files follow a naming convention: *_jax.py / *_numpy.py.
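
The dispatch itself is the standard optional-dependency pattern. An illustrative sketch — not JAMMA's actual detect_backend(), whose signature isn't shown here:

import importlib.util

def detect_backend(force: str | None = None) -> str:
    """Pick 'jax' when installed, else fall back to 'numpy'."""
    if force in ("numpy", "jax"):
        return force
    return "jax" if importlib.util.find_spec("jax") is not None else "numpy"

print(f"using {detect_backend()} backend")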

See Code Map for the full architecture diagram with source links.

Requirements

  • Python 3.11+
  • NumPy 2.0+
  • JAX 0.8.0+ (optional, for GPU acceleration: pip install jamma[jax])

License

GPL-3.0 (same as GEMMA)
