cell-eval

Evaluation metrics for single-cell perturbation predictions

Description

This package provides a comprehensive suite of metrics for evaluating the performance of models that predict cellular responses to perturbations at the single-cell level. It can be used either as a command-line tool or as a Python module.

Installation

Install with uv

# Install from PyPI
uv pip install -U cell-eval

# Install directly from GitHub
uv pip install -U git+https://github.com/arcinstitute/cell-eval

# Install the CLI with uv tool
uv tool install -U git+https://github.com/arcinstitute/cell-eval

# Check the installation
cell-eval --help

Usage

To get started, you'll need two AnnData files, which can be loaded as shown below:

  1. a predicted AnnData (adata_pred).
  2. a real AnnData to compare against (adata_real).
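
For example, you might load the two files with the anndata package (the paths below are placeholders; point them at your own files):

import anndata as ad

# Placeholder paths; substitute your own prediction and ground-truth files.
adata_pred = ad.read_h5ad("predictions.h5ad")
adata_real = ad.read_h5ad("ground_truth.h5ad")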

Prep (VCC)

To prepare an AnnData file for VCC (Virtual Cell Challenge) evaluation, use the cell-eval prep command. This strips the AnnData down to the bare essentials, compresses it, adjusts naming conventions, and ensures compatibility with the evaluation framework.

This step is optional for downstream usage, but recommended for optimal performance and compatibility.

Run this on your predicted AnnData:

cell-eval prep \
    -i <your/path/to>.h5ad \
    -g <expected_genelist>

Run

To run an evaluation between two AnnData files, use the cell-eval run command.

This runs differential expression for each AnnData and then computes a suite of evaluation metrics to compare the two (select your metric profile with the --profile flag).

To save time, you can supply precomputed differential expression results; see cell-eval run --help for more information.

cell-eval run \
    -ap <your/path/to/pred>.h5ad \
    -ar <your/path/to/real>.h5ad \
    --num-threads 64 \
    --profile full

To run this as a Python module, use the MetricsEvaluator class.

from cell_eval import MetricsEvaluator
from cell_eval.data import build_random_anndata, downsample_cells

# Build a random AnnData as the "real" reference, then downsample it
# to simulate a set of predictions.
adata_real = build_random_anndata()
adata_pred = downsample_cells(adata_real, fraction=0.5)

evaluator = MetricsEvaluator(
    adata_pred=adata_pred,
    adata_real=adata_real,
    control_pert="control",    # label of the control population
    pert_col="perturbation",   # obs column holding perturbation labels
    num_threads=64,
)
(results, agg_results) = evaluator.compute()

This will give you metric evaluations for each perturbation individually (results) and aggregated results over all perturbations (agg_results).
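
For illustration only (and assuming the returned objects are polars DataFrames, which may vary across versions), you could persist them for the scoring step below:

# Hypothetical post-processing; check your installed version's return types.
results.write_csv("results.csv")
agg_results.write_csv("agg_results.csv")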

Score

To normalize your scores against a baseline, run the cell-eval score command.

This accepts two agg_results.csv files (or agg_results objects in Python) as input.

cell-eval score \
    --user-input <your/path/to/user>/agg_results.csv \
    --base-input <your/path/to/base>/agg_results.csv

Or from Python:

from cell_eval import score_agg_metrics

# Aggregated results from your model and from a baseline run.
user_input = "./cell-eval-user/agg_results.csv"
base_input = "./cell-eval-base/agg_results.csv"
output_path = "./score.csv"

score_agg_metrics(
    results_user=user_input,
    results_base=base_input,
    output=output_path,
)

Library Design

The metrics are built using the Python registry pattern, which allows easy extension with new metrics through a well-typed interface.

Take a look at existing metrics in cell_eval.metrics to get started.
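
As a rough sketch of the registry pattern in general (the names below are hypothetical, not cell-eval's actual interface), registering a new metric looks something like this:

from typing import Callable

import numpy as np

# Hypothetical registry mapping metric names to metric functions.
METRIC_REGISTRY: dict[str, Callable[[np.ndarray, np.ndarray], float]] = {}

def register_metric(name: str):
    """Decorator that adds a metric function to the registry by name."""
    def wrapper(fn: Callable[[np.ndarray, np.ndarray], float]):
        METRIC_REGISTRY[name] = fn
        return fn
    return wrapper

@register_metric("mae")
def mean_absolute_error(pred: np.ndarray, real: np.ndarray) -> float:
    """Mean absolute error between predicted and real expression."""
    return float(np.mean(np.abs(pred - real)))

# Metrics can then be looked up and applied by name.
score = METRIC_REGISTRY["mae"](np.zeros(10), np.ones(10))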

Development

This project is open source and welcomes contributions. Feel free to submit a pull request or open an issue.

Citation

Any publication that uses this source code should cite the State paper.
