hpdex
High‑performance differential expression analysis for single‑cell data
Overview · Installation · Quick Start · API · Kernels · Testing · FAQ · License
🔎 Overview
hpdex provides efficient differential expression (DE) analysis for single‑cell data using multiprocessing and an optimized Mann–Whitney U implementation. It aims to be statistically consistent with scipy.stats.mannwhitneyu while scaling to large datasets.
| Feature | Description |
|---|---|
| ⚡️ Fast | Batch processing & shared memory minimize copies. |
| 🧠 Accurate | Tie‑aware U statistics; normal approximation for large n. |
| 🧰 Versatile | Float & histogram kernels auto‑selected by data type. |
| 🧵 Parallel | Simple num_workers control. |
| 💾 Memory‑savvy | Reuses pre‑sorted references across comparisons. |
| 📊 Streaming | Handles datasets larger than RAM via chunking. |
⚙️ Installation
Quick Install (coming soon)
pip install hpdex
uv (recommended)
git clone https://github.com/AI4Cell/hpdex.git
cd hpdex
uv sync
pip (from source)
git clone https://github.com/AI4Cell/hpdex.git
cd hpdex
pip install -e .
Requirements
- Python ≥ 3.10
- numpy, scipy, numba, pandas, anndata
🚀 Quick Start
import anndata as ad
from hpdex import parallel_differential_expression
# Load your data
adata = ad.read_h5ad("data.h5ad")
# Run differential expression analysis
results = parallel_differential_expression(
    adata,
    groupby_key="perturbation",
    reference="control",
    num_workers=4,
)
# Save results
results.to_csv("de_results.csv", index=False)
Output schema (DataFrame columns):
| column | description |
|---|---|
| target | target group name |
| feature | gene / feature id |
| p_value | two‑sided p‑value from Mann–Whitney U |
| fold_change | mean(target) / mean(reference) |
| log2_fold_change | log2(fold_change) |
| fdr | BH‑adjusted p‑value (Benjamini–Hochberg) |
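A common next step is to threshold the results DataFrame; the cutoffs below (fdr < 0.05, |log2 fold change| > 1) are illustrative choices, not hpdex defaults:

```python
import pandas as pd

# Results saved in the Quick Start above; column names follow the schema table.
results = pd.read_csv("de_results.csv")

# Illustrative thresholds; tune them for your own analysis.
significant = results[(results["fdr"] < 0.05) & (results["log2_fold_change"].abs() > 1.0)]

# Top 10 features per target group, ranked by adjusted p-value.
top_hits = significant.sort_values("fdr").groupby("target").head(10)
print(top_hits[["target", "feature", "log2_fold_change", "fdr"]])
```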
📚 API Reference
parallel_differential_expression
Main entry for DE analysis.
parallel_differential_expression(
    adata: ad.AnnData,
    groupby_key: str,
    reference: str,
    groups: Optional[List[str]] = None,
    metric: str = "wilcoxon",
    tie_correction: bool = True,
    continuity_correction: bool = True,
    use_asymptotic: Optional[bool] = None,
    min_samples: int = 2,
    max_bins: int = 100_000,
    prefer_hist_if_int: bool = False,
    num_workers: int = 1,
    clip_value: float = 20.0,
) -> pd.DataFrame
Parameters
- adata — AnnData object containing expression matrix & metadata
- groupby_key — column in adata.obs for grouping
- reference — reference group name (e.g., "control")
- groups — optional subset of target groups (auto if None)
- metric — currently "wilcoxon" (Mann–Whitney U)
- tie_correction — whether to apply tie correction
- continuity_correction — whether to apply continuity correction
- use_asymptotic — whether to use the asymptotic approximation
- min_samples — minimum number of samples per group
- max_bins — maximum number of bins for the histogram algorithm
- prefer_hist_if_int — prefer the histogram algorithm for integer data
- num_workers — number of worker processes
- clip_value — clip fold change if infinite or NaN
💡 Tips
- For UMI counts, set prefer_hist_if_int=True to favor the histogram kernel.
- Very large samples may produce extremely small p_values due to underflow; rely on fdr for decisions.
Returns — pd.DataFrame (see Output schema above)
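As a concrete sketch, a run on raw UMI counts following the tips above might look like this; the file name and target group names are placeholders:

```python
import anndata as ad
from hpdex import parallel_differential_expression

adata = ad.read_h5ad("umi_counts.h5ad")  # placeholder path; integer UMI counts

results = parallel_differential_expression(
    adata,
    groupby_key="perturbation",
    reference="control",
    groups=["pert_A", "pert_B"],   # placeholder target group names
    prefer_hist_if_int=True,       # favor the histogram kernel for integer data
    num_workers=8,
)
```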
🧪 Statistical Kernels
hpdex implements two complementary kernels and auto‑selects by data type.
Float Kernel
- Use: continuous expression (e.g., log‑counts)
- Alg: merge‑rank computation for U; Numba JIT; vectorized batches
- Mem: O(n) working memory for sorted arrays
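Conceptually, the tie‑aware U statistic can be obtained from midranks of the pooled sample; the sketch below uses scipy.stats.rankdata for clarity and is not the Numba merge‑rank kernel itself:

```python
import numpy as np
from scipy.stats import rankdata

def rank_mannwhitney_u(x, y):
    """Toy sketch: tie-aware U via midranks of the pooled sample (not the hpdex kernel)."""
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled)        # midranks handle ties
    r1 = ranks[: len(x)].sum()      # rank sum of the target sample
    return r1 - len(x) * (len(x) + 1) / 2

x = np.array([0.5, 1.2, 1.2, 3.3])
y = np.array([0.1, 0.5, 2.0])
print(rank_mannwhitney_u(x, y))  # equals scipy.stats.mannwhitneyu(x, y).statistic
```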
Histogram Kernel
- Use: integer/discrete counts (e.g., UMI)
- Alg: bucketized rank computation; reduces sorting overhead
- Mem: O(bins) working memory, typically ≪ data size
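To illustrate the bucketized idea (again a sketch, not the actual hpdex kernel), the same U statistic can be accumulated from per‑value counts of non‑negative integer data without any sorting:

```python
import numpy as np

def hist_mannwhitney_u(x, y, n_bins):
    """Toy sketch: U for non-negative integer samples via histograms (not the hpdex kernel)."""
    cx = np.bincount(x, minlength=n_bins).astype(np.float64)
    cy = np.bincount(y, minlength=n_bins).astype(np.float64)
    below = np.concatenate(([0.0], np.cumsum(cy)[:-1]))  # count of y strictly below each value
    # Each x at value v beats every y below v and half of the ties at v.
    return np.sum(cx * (below + 0.5 * cy))

x = np.array([0, 1, 1, 3, 5])
y = np.array([0, 0, 2, 2, 4])
print(hist_mannwhitney_u(x, y, n_bins=6))  # matches scipy.stats.mannwhitneyu(x, y).statistic
```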
Common features
- Proper tie handling and variance correction
- Asymptotic normal approximation for large samples
- Batching across gene × group pairs
- Reference re‑use to save sorting cost
The kernels aim to match scipy.stats.mannwhitneyu numerically under equivalent settings.
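A quick way to sanity‑check this on your own data (a sketch assuming the Quick Start objects above; the group name is a placeholder, and adata.to_df() densifies the matrix, which may be memory‑heavy):

```python
import numpy as np
from scipy.stats import mannwhitneyu

expr = adata.to_df()                  # dense cells × genes DataFrame
gene = expr.columns[0]                # arbitrary example feature
target = expr.loc[adata.obs["perturbation"] == "pert_A", gene]   # placeholder group
control = expr.loc[adata.obs["perturbation"] == "control", gene]

# Two-sided p-value with continuity correction, as in the documented defaults;
# method="asymptotic" mirrors the normal approximation used for large samples.
res = mannwhitneyu(target, control, alternative="two-sided",
                   method="asymptotic", use_continuity=True)
print(res.pvalue)  # should closely match the corresponding hpdex p_value
```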
🧷 Testing
See test/README.md for full docs.
Quick test
cd test
python test.py config_quick.yml
Full suite
python test.py config.yml
❓ FAQ
Does hpdex correct for multiple testing?
Yes. The returned fdr column applies Benjamini–Hochberg (BH) adjustment to the raw p_values.
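For reference, the BH adjustment can be reproduced from raw p‑values in a few lines of NumPy (a generic sketch, not hpdex internals):

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini–Hochberg adjusted p-values (generic sketch, not hpdex internals)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward, then cap at 1.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0, 1)
    return out

print(bh_adjust([0.001, 0.02, 0.03, 0.5]))  # compare with results["fdr"]
```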
Why do I see extremely small p_values (close to 0)?
For very large samples and strong effects, underflow can make values effectively 0.0 in float precision. This is expected; rely on fdr for decision making.
When should I prefer the Histogram kernel?
When the data are integer UMI counts with limited range. It avoids full sorting per target and is usually faster and more memory‑efficient.
📄 License
MIT License — see LICENSE.