
GGE: A Standardized Framework for Evaluating Gene Expression Generative Models

Requires Python 3.8+. Licensed under MIT.

Paper: Accepted at the Gen2 Workshop at ICLR 2026

(Figure: GGE framework overview)

Comprehensive, standardized evaluation of generated gene expression data.

GGE (Generated Genetic Expression Evaluator) addresses the need for standardized evaluation of single-cell gene expression generative models. Current practice suffers from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically grounded metrics. GGE provides:

  • Comprehensive suite of distributional metrics with explicit computation space options
  • Biologically-motivated evaluation through DEG-focused analysis with perturbation-effect correlation
  • Standardized reporting for reproducible benchmarking

Full documentation with API tutorials is available online.

Key Features

  • Per-metric space configuration (raw, PCA, DEG)
  • Perturbation-effect correlation (Paper Eq. 1)
  • Configurable DEG thresholds
  • GPU (CUDA) and Apple MPS acceleration
  • Per-gene and aggregate metrics
  • Publication-quality visualizations (static and interactive)
  • Simple Python API and CLI
  • Mixed-space evaluation with evaluate_lazy()

Metrics

All metrics are computed per gene (returning a vector) and then aggregated:

| Metric | Description | Direction |
|---|---|---|
| Pearson Correlation | Linear correlation between expression profiles | Higher is better |
| Spearman Correlation | Rank correlation (robust to outliers) | Higher is better |
| R² | Coefficient of determination | Higher is better |
| Perturbation-Effect Correlation | Correlation of (real - ctrl) vs. (gen - ctrl) effects | Higher is better |
| MSE | Mean Squared Error | Lower is better |
| Wasserstein-1 | Earth Mover's Distance (L1) | Lower is better |
| Wasserstein-2 | Sinkhorn-regularized optimal transport | Lower is better |
| MMD | Maximum Mean Discrepancy (RBF kernel) | Lower is better |
| Energy Distance | Statistical potential energy between distributions | Lower is better |
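As a concrete illustration of the per-gene-then-aggregate pattern, here is a minimal NumPy sketch (our own toy code, not GGE's implementation; it assumes row-matched samples):

```python
import numpy as np

def per_gene_pearson(real, gen):
    """Pearson correlation computed independently for each gene (column).

    real, gen: (n_samples, n_genes) matrices with row-matched samples.
    Returns one correlation per gene.
    """
    rc = real - real.mean(axis=0)
    gc = gen - gen.mean(axis=0)
    num = (rc * gc).sum(axis=0)
    den = np.sqrt((rc ** 2).sum(axis=0) * (gc ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 5))
gen = real + rng.normal(scale=0.1, size=(100, 5))  # noisy copy of real

per_gene = per_gene_pearson(real, gen)  # vector: one value per gene
aggregate = per_gene.mean()             # scalar summary
```

GGE additionally reports the full per-gene vectors (the per_gene_*.csv files in the output), so gene-level failure modes are not hidden behind the aggregate.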

Visualizations

  • Boxplots and violin plots for metric distributions
  • Radar plots for multi-metric comparison
  • Scatter plots for real vs generated expression
  • Embedding plots (PCA/UMAP) for real vs generated data
  • Heatmaps for per-gene metric values
  • Interactive Plotly plots with density overlays and metadata coloring

Computation Spaces

GGE treats computation space as a first-class parameter (see Paper Section 3.3):

| Space | Description | When to Use |
|---|---|---|
| Raw Gene Space | Full ~5,000–20,000 gene dimensions | Gene-level interpretability needed |
| PCA Space | Reduced k-dimensional space (default: 50) | Primary distributional metrics |
| DEG Space | Restricted to differentially expressed genes | Biologically targeted evaluation |

Recommendation: Use multi-space evaluation—PCA-50 for distributional metrics, DEG for biological focus.
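To make the PCA option concrete, the following NumPy/SVD sketch projects both datasets into one shared 50-dimensional space fitted on the real data. This illustrates the idea only; it is not GGE's internal code, and whether the basis is fitted on real, generated, or pooled data is a configuration detail (this sketch fits on real data only, so the generated data cannot define the comparison space):

```python
import numpy as np

def fit_pca_projection(X, k=50):
    """Fit a k-component PCA basis on X (n_samples, n_genes) via SVD."""
    mean = X.mean(axis=0)
    # Rows of vt are the principal axes in gene space
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k].T  # center: (n_genes,), basis: (n_genes, k)

rng = np.random.default_rng(1)
real = rng.normal(size=(200, 100))  # toy data: 200 cells x 100 genes
gen = rng.normal(size=(150, 100))

# Fit the basis on real data only, then project both into the same space
mean, basis = fit_pca_projection(real, k=50)
real_pc = (real - mean) @ basis  # shape (200, 50)
gen_pc = (gen - mean) @ basis    # shape (150, 50)
```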

Installation

pip install gge-eval

The package includes GPU-accelerated metrics via geomloss, which automatically falls back to CPU if no GPU is available.

Quick Start

Python API

from gge import evaluate

# From file paths
results = evaluate(
    real_data="real_data.h5ad",
    generated_data="generated_data.h5ad",
    condition_columns=["perturbation", "cell_type"],
    split_column="split",  # Optional: for train/test
    output_dir="evaluation_output/"
)

# From AnnData objects
import scanpy as sc
real_adata = sc.read_h5ad("real_data.h5ad")
generated_adata = sc.read_h5ad("generated_data.h5ad")

results = evaluate(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
)

# Mixed (path + AnnData)
results = evaluate(
    real_data="real_data.h5ad",
    generated_data=generated_adata,
    condition_columns=["perturbation"],
)

# Access results
print(results.summary())

# Get metric for specific split
test_results = results.get_split("test")
for condition, cond_result in test_results.conditions.items():
    print(f"{condition}: Pearson={cond_result.get_metric_value('pearson'):.3f}")

Command Line

# Basic usage
gge --real real.h5ad --generated generated.h5ad \
    --conditions perturbation cell_type \
    --output results/

# With split column
gge --real real.h5ad --generated generated.h5ad \
    --conditions perturbation \
    --split-column split \
    --splits test \
    --output results/

# Specify metrics
gge --real real.h5ad --generated generated.h5ad \
    --conditions perturbation \
    --metrics pearson spearman wasserstein_1 mmd r_squared \
    --output results/

DEG-Space Evaluation

GGE supports evaluating generative models specifically on differentially expressed genes (DEGs), which focuses the evaluation on the genes that matter most for capturing perturbation effects (Paper Section 4.3).

The Problem: Correlations computed on raw expression means can be artificially high when control and perturbed conditions are broadly similar, because they are dominated by genes expressed at similar levels across conditions.

The Solution: Perturbation-Effect Correlation (Paper Equation 1):

ρ_effect = corr(μ_real - μ_ctrl, μ_gen - μ_ctrl)

This measures whether models capture the direction and magnitude of perturbation effects, not just absolute expression levels.
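On per-gene mean profiles, Eq. 1 is a one-liner. The following sketch with synthetic profiles illustrates the formula itself (GGE's helper compute_perturbation_effect_correlation wraps the same idea):

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes = 200

# Synthetic per-gene mean expression profiles
mu_ctrl = rng.normal(size=n_genes)
true_effect = rng.normal(size=n_genes)  # the perturbation signal
mu_real = mu_ctrl + true_effect
mu_gen = mu_ctrl + true_effect + rng.normal(scale=0.2, size=n_genes)

# Eq. 1: correlate real and generated perturbation effects
rho_effect = np.corrcoef(mu_real - mu_ctrl, mu_gen - mu_ctrl)[0, 1]
```

Because both sides subtract the same control mean, the shared baseline expression that would otherwise dominate the correlation is removed, leaving only the perturbation effect.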

from gge import (
    evaluate_deg_space, 
    identify_degs, 
    compute_perturbation_effects,
    compute_perturbation_effect_correlation,
)
import scanpy as sc

real_adata = sc.read_h5ad("real_data.h5ad")
generated_adata = sc.read_h5ad("generated_data.h5ad")

# Evaluate in DEG space (automatically identifies DEGs)
results, deg_info = evaluate_deg_space(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
    deg_condition_column="perturbation",  # Column for DEG identification
    control_value="control",               # Control condition label
    log2fc_threshold=1.0,                  # |log2FC| > 1
    pvalue_threshold=0.05,                 # Adjusted p-value < 0.05
    return_degs=True,
)

# View identified DEGs
print(f"Found {deg_info['is_deg'].sum()} DEGs")
deg_genes = deg_info[deg_info['is_deg']]['gene'].tolist()

# Compute perturbation-effect correlation (Paper Eq. 1)
control_mask = real_adata.obs['perturbation'] == 'control'
control_mean = real_adata[control_mask].X.mean(axis=0)

perturbed_mask = real_adata.obs['perturbation'] != 'control'
rho_effect = compute_perturbation_effect_correlation(
    real_perturbed=real_adata[perturbed_mask].X,
    generated_perturbed=generated_adata[perturbed_mask].X,
    control_mean=control_mean,
    method="pearson",  # or "spearman"
)
print(f"Perturbation-effect correlation: {rho_effect:.3f}")

# Compute fold changes for analysis
effects = compute_perturbation_effects(
    real_adata,
    condition_column="perturbation",
    control_value="control",
)

PC-Space Evaluation

For comparing global structure efficiently, GGE provides PC-space (principal component) evaluation (see Paper Section 3.3):

from gge import evaluate_pc_space, compute_pca, PCSpaceEvaluator

# Quick evaluation in PC space
results = evaluate_pc_space(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
    n_components=50,              # Number of PCs
    use_highly_variable=True,     # Filter to HVGs first
    n_top_genes=2000,             # Number of HVGs
)
print(results.summary())

# Or use the evaluator class for more control
evaluator = PCSpaceEvaluator(n_components=50)
real_pc, gen_pc = evaluator.transform_to_pc_space(real_adata, generated_adata)

# Access PC coordinates
real_coords = real_pc.obsm['X_pca']  # shape: (n_samples, n_components)
gen_coords = gen_pc.obsm['X_pca']

# Compute PCA on a single dataset
adata_pca = compute_pca(real_adata, n_components=50)

Combined Evaluation Strategy

For comprehensive evaluation, combine gene-space, DEG-space, and PC-space metrics:

from gge import evaluate, evaluate_deg_space, evaluate_pc_space

# 1. Full gene-space evaluation
gene_results = evaluate(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
    metrics=["pearson", "spearman", "r_squared", "wasserstein_1", "mmd"],
)

# 2. DEG-space evaluation (perturbation-focused)
deg_results, degs = evaluate_deg_space(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
    deg_condition_column="perturbation",
    control_value="control",
    return_degs=True,
)

# 3. PC-space evaluation (global structure)
pc_results = evaluate_pc_space(
    real_data=real_adata,
    generated_data=generated_adata,
    condition_columns=["perturbation"],
    n_components=50,
)

print("Gene-space:", gene_results.summary())
print("DEG-space:", deg_results.summary())
print("PC-space:", pc_results.summary())

Mixed-Space Evaluation (Paper API)

For maximum flexibility, use evaluate_lazy() with per-metric space configuration:

from gge import evaluate_lazy
from gge.metrics import (
    PearsonCorrelation,
    Wasserstein2Distance,
    MMDDistance,
    RSquared,
)

# Define metrics with different computation spaces
metrics = [
    # Correlation in DEG space (biologically-focused)
    PearsonCorrelation(space="deg", deg_lfc=0.25, deg_pval=0.1),
    
    # Distributional metrics in PCA space (global structure)
    Wasserstein2Distance(space="pca", n_components=50),
    MMDDistance(space="pca", n_components=50),
    
    # R-squared in raw space
    RSquared(space="raw"),
]

# Evaluate with mixed spaces
results = evaluate_lazy(
    real_path="real_data.h5ad",
    generated_path="generated_data.h5ad",
    condition_columns=["perturbation"],  # Can also be a string
    control_key="ctrl",  # Required for DEG space
    metrics=metrics,
)

print(results.summary())

Metric names automatically include space suffixes: pearson_deg, wasserstein_2_pca50, mmd_pca50, r_squared.

Expected Data Format

GGE expects AnnData (h5ad) files with:

Required

  • adata.X: Gene expression matrix (samples × genes)
  • adata.var_names: Gene identifiers (must overlap between datasets)
  • adata.obs[condition_columns]: Columns for matching conditions

Optional

  • adata.obs[split_column]: Train/test split indicator
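Before running an evaluation, it can help to verify these requirements up front. The sketch below uses plain pandas to check gene overlap and condition columns; check_gge_inputs is our hypothetical helper, not part of GGE:

```python
import pandas as pd

def check_gge_inputs(real_vars, gen_vars, real_obs, condition_columns):
    """Lightweight pre-flight check mirroring GGE's input requirements.

    real_vars / gen_vars: gene identifiers (adata.var_names) of each dataset.
    real_obs: the adata.obs DataFrame; condition_columns must be present.
    """
    shared = pd.Index(real_vars).intersection(pd.Index(gen_vars))
    if len(shared) == 0:
        raise ValueError("no overlapping genes between real and generated data")
    missing = [c for c in condition_columns if c not in real_obs.columns]
    if missing:
        raise ValueError(f"missing condition columns: {missing}")
    return shared

obs = pd.DataFrame({"perturbation": ["ctrl", "drugA"], "cell_type": ["T", "B"]})
shared = check_gge_inputs(
    real_vars=["GATA1", "TP53", "MYC"],
    gen_vars=["TP53", "MYC", "KLF4"],
    real_obs=obs,
    condition_columns=["perturbation"],
)
# shared holds the genes on which the two datasets can be compared
```

The returned index could then be used to subset both AnnData objects to their common genes before calling evaluate().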

Output Structure

output/
├── summary.json          # Aggregate metrics and metadata
├── results.csv           # Per-condition metrics table
├── per_gene_*.csv        # Per-gene metric values
└── plots/
    ├── boxplot_metrics.png
    ├── violin_metrics.png
    ├── radar_split.png
    ├── scatter_grid.png
    └── embedding_pca.png

Contributing

Contributions are welcome! Please feel free to submit a pull request or open an issue.

Citation

If you use GGE in your research, please cite our paper:

@misc{rubbi2026gge,
  title  = {A Standardized Framework for Evaluating Gene Expression Generative Models},
  author = {Rubbi, Andrea and Di Francesco, Andrea Giuseppe and Lotfollahi, Mohammad and Liò, Pietro},
  year   = {2026},
  note   = {Presented at the GenAI in Genomics Workshop at ICLR 2026},
  url    = {https://genai-in-genomics.github.io/}
}

or the software

@software{rubbi2026gge,
  author = {Rubbi, Andrea},
  title  = {GGE: Generated Genetic Expression Evaluator},
  year   = {2026},
  url    = {https://github.com/AndreaRubbi/GGE}
}

License

This project is licensed under the MIT License. See the LICENSE file for details.
