cell-eval

Evaluation metrics for single-cell perturbation predictions

Description

This package provides a comprehensive suite of metrics for evaluating the performance of models that predict cellular responses to perturbations at the single-cell level. It can be used either as a command-line tool or as a Python module.

Installation

Install with uv

# Install from PyPI
uv pip install -U cell-eval

# Install directly from GitHub
uv pip install -U git+https://github.com/arcinstitute/cell-eval

# Install the CLI with uv tool
uv tool install -U git+https://github.com/arcinstitute/cell-eval

# Check installation
cell-eval --help

Usage

To get started, you'll need two AnnData files (see the loading sketch below):

  1. A predicted AnnData (adata_pred).
  2. A real AnnData to compare against (adata_real).
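
For illustration, both files can be loaded with the anndata package. The paths here are placeholders:

import anndata as ad

# Load model predictions and ground-truth observations
# (replace these hypothetical paths with your own files).
adata_pred = ad.read_h5ad("path/to/pred.h5ad")
adata_real = ad.read_h5ad("path/to/real.h5ad")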

Prep (VCC)

To prepare an AnnData for VCC evaluation, use the cell-eval prep command. This strips the AnnData down to the essentials, compresses it, adjusts naming conventions, and ensures compatibility with the evaluation framework.

This step is optional for downstream usage, but recommended for optimal performance and compatibility.

Run this on your predicted AnnData:

cell-eval prep \
    -i <your/path/to>.h5ad \
    -g <expected_genelist>
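
Conceptually, prep boils down to something like the following sketch. It is not the actual implementation; the paths and the one-gene-per-line genelist format are assumptions:

import anndata as ad

# Illustrative sketch only -- not the actual `cell-eval prep` logic.
adata = ad.read_h5ad("path/to/pred.h5ad")

# Subset to the expected genes, in the expected order
# (assumes a plain-text list with one gene name per line).
expected_genes = [line.strip() for line in open("expected_genelist.txt")]
adata = adata[:, expected_genes].copy()

# Drop nonessential payload and write a compressed copy.
adata.layers.clear()
adata.write_h5ad("path/to/pred_prepped.h5ad", compression="gzip")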

Run

To run an evaluation between two AnnDatas, use the cell-eval run command.

This runs differential expression on each AnnData, then a suite of evaluation metrics to compare the two (select your suite of metrics with the --profile flag).

To save time, you can submit precomputed differential expression results; see the cell-eval run --help menu for more information.

cell-eval run \
    -ap <your/path/to/pred>.h5ad \
    -ar <your/path/to/real>.h5ad \
    --num-threads 64 \
    --profile full

To run this as a Python module, use the MetricsEvaluator class.

from cell_eval import MetricsEvaluator
from cell_eval.data import build_random_anndata, downsample_cells

# Build a synthetic example: random "real" data and a downsampled "prediction"
adata_real = build_random_anndata()
adata_pred = downsample_cells(adata_real, fraction=0.5)

evaluator = MetricsEvaluator(
    adata_pred=adata_pred,
    adata_real=adata_real,
    control_pert="control",  # label of the control population
    pert_col="perturbation",  # .obs column holding perturbation labels
    num_threads=64,
)
(results, agg_results) = evaluator.compute()

This will give you metric evaluations for each perturbation individually (results) and aggregated results over all perturbations (agg_results).
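
To inspect the outputs, something like the following should work, assuming the returned objects are dataframe-like (the exact return type may vary between versions):

# Peek at per-perturbation and aggregated metrics
# (assumes dataframe-like return values; adjust for your version).
print(results.head())
print(agg_results.head())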

Score

To normalize your scores against a baseline, run the cell-eval score command.

It accepts two agg_results.csv files (or agg_results objects in Python) as input.

cell-eval score \
    --user-input <your/path/to/user>/agg_results.csv \
    --base-input <your/path/to/base>/agg_results.csv

Or from Python:

from cell_eval import score_agg_metrics

user_input = "./cell-eval-user/agg_results.csv"
base_input = "./cell-eval-base/agg_results.csv"
output_path = "./score.csv"

score_agg_metrics(
    results_user=user_input,
    results_base=base_input,
    output=output_path,
)

Library Design

The metrics are built using a registry pattern in Python, which allows the suite to be extended with new metrics through a well-typed interface.

Take a look at the existing metrics in cell_eval.metrics to get started; a generic sketch of the pattern follows below.
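
The sketch below shows the registry pattern in generic form. It is not the actual cell-eval interface (METRIC_REGISTRY and register_metric are hypothetical names); see cell_eval.metrics for the real one:

from typing import Callable

import numpy as np

# Hypothetical registry -- the real interface lives in cell_eval.metrics.
METRIC_REGISTRY: dict[str, Callable[[np.ndarray, np.ndarray], float]] = {}

def register_metric(name: str):
    """Register a metric function under a string name."""
    def decorator(fn: Callable[[np.ndarray, np.ndarray], float]):
        METRIC_REGISTRY[name] = fn
        return fn
    return decorator

@register_metric("mse")
def mean_squared_error(pred: np.ndarray, real: np.ndarray) -> float:
    """Mean squared error between predicted and real expression."""
    return float(np.mean((pred - real) ** 2))

# A runner can then look up metrics by name:
score = METRIC_REGISTRY["mse"](np.zeros(3), np.ones(3))
print(score)  # 1.0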

Development

This work is open-source and welcomes contributions. Feel free to submit a pull request or open an issue.

Citation

Any publication that uses this source code should cite the State paper.
