
Evaluation metrics for single-cell perturbation predictions


cell-eval

Description

This package provides a comprehensive suite of metrics for evaluating the performance of models that predict cellular responses to perturbations at the single-cell level. It can be used either as a command-line tool or as a Python module.

Installation

Distribution with uv

# install from pypi
uv pip install -U cell-eval

# install from github directly
uv pip install -U git+https://github.com/arcinstitute/cell-eval

# install cli with uv tool
uv tool install -U git+https://github.com/arcinstitute/cell-eval

# Check installation
cell-eval --help

Usage

To get started you'll need two AnnData files (see the loading sketch after this list):

  1. a predicted AnnData (adata_pred).
  2. a real AnnData to compare against (adata_real).
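For example, you might load them with anndata; this is a minimal sketch, and the file paths are placeholders for your own data:

import anndata as ad

# Placeholder paths: point these at your own files
adata_pred = ad.read_h5ad("predictions.h5ad")   # model-predicted cells
adata_real = ad.read_h5ad("ground_truth.h5ad")  # observed cells to compare against

# Both objects should share gene names and a perturbation-label column in .obs
print(adata_pred.shape, adata_real.shape)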

Prep (VCC)

To prepare an AnnData for VCC evaluation, use the cell-eval prep command. This will strip the AnnData down to the essentials, compress it, adjust naming conventions, and ensure compatibility with the evaluation framework.

This step is optional for downstream usage, but recommended for optimal performance and compatibility.

Run this on your predicted AnnData:

cell-eval prep \
    -i <your/path/to>.h5ad \
    -g <expected_genelist>
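
After prep completes, it can be useful to sanity-check the output in Python. The sketch below is illustrative only: the output path and the gene-list format (one gene symbol per line) are assumptions, so adjust them to match what cell-eval prep actually wrote for you.

import anndata as ad

# Assumed paths: the prepped .h5ad produced by `cell-eval prep` and your expected gene list
adata = ad.read_h5ad("prepped.h5ad")
with open("expected_genelist.txt") as f:
    expected_genes = [line.strip() for line in f if line.strip()]

# The prepped AnnData should contain the expected genes
missing = set(expected_genes) - set(adata.var_names)
print(f"{adata.n_obs} cells x {adata.n_vars} genes; missing genes: {len(missing)}")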

Run

To run an evaluation between two AnnData files, use the cell-eval run command.

This will compute differential expression for each AnnData and then run a suite of evaluation metrics to compare the two (select your suite of metrics with the --profile flag).

To save time, you can supply precomputed differential expression results; see cell-eval run --help for more information.

cell-eval run \
    -ap <your/path/to/pred>.h5ad \
    -ar <your/path/to/real>.h5ad \
    --num-threads 64 \
    --profile full

To run this as a Python module, use the MetricsEvaluator class.

from cell_eval import MetricsEvaluator
from cell_eval.data import build_random_anndata, downsample_cells

# Toy data: treat a random AnnData as "real" and a 50% cell downsample as the "prediction"
adata_real = build_random_anndata()
adata_pred = downsample_cells(adata_real, fraction=0.5)

evaluator = MetricsEvaluator(
    adata_pred=adata_pred,
    adata_real=adata_real,
    control_pert="control",      # name of the control perturbation
    pert_col="perturbation",     # obs column holding perturbation labels
    num_threads=64,
)
(results, agg_results) = evaluator.compute()

This will give you metric evaluations for each perturbation individually (results) and aggregated results over all perturbations (agg_results).
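
If you plan to feed these into the scoring step below, you can persist the aggregated results to CSV. This is a sketch that assumes the returned objects are polars DataFrames; if they are pandas DataFrames in your version, use to_csv instead.

# Assumption: `results` and `agg_results` are polars DataFrames
results.write_csv("results.csv")
agg_results.write_csv("agg_results.csv")  # usable as input to `cell-eval score`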

Score

To normalize your scores against a baseline, run the cell-eval score command.

This accepts two agg_results.csv files (or agg_results objects in Python) as input.

cell-eval score \
    --user-input <your/path/to/user>/agg_results.csv \
    --base-input <your/path/to/base>/agg_results.csv

Or from Python:

from cell_eval import score_agg_metrics

user_input = "./cell-eval-user/agg_results.csv"
base_input = "./cell-eval-base/agg_results.csv"
output_path = "./score.csv"

score_agg_metrics(
    results_user=user_input,
    results_base=base_input,
    output=output_path,
)

Library Design

The metrics are built using the Python registry pattern, which makes it easy to extend the suite with new metrics through a well-typed interface.

Take a look at existing metrics in cell_eval.metrics to get started.
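
For orientation, a generic registry pattern looks roughly like the sketch below. This is an illustrative example, not cell-eval's actual interface: the decorator name, metric signature, and registry structure are assumptions, so consult cell_eval.metrics for the real definitions.

from typing import Callable

import numpy as np

# Hypothetical registry mapping a metric name to the function that computes it
METRIC_REGISTRY: dict[str, Callable[[np.ndarray, np.ndarray], float]] = {}

def register_metric(name: str):
    """Decorator that adds a metric function to the registry under `name`."""
    def wrapper(fn: Callable[[np.ndarray, np.ndarray], float]):
        METRIC_REGISTRY[name] = fn
        return fn
    return wrapper

@register_metric("mse")
def mean_squared_error(pred: np.ndarray, real: np.ndarray) -> float:
    # Mean squared error between predicted and observed expression profiles
    return float(np.mean((pred - real) ** 2))

# An evaluator can then loop over the registry to run every registered metric
scores = {name: fn(np.zeros(10), np.ones(10)) for name, fn in METRIC_REGISTRY.items()}
print(scores)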

Development

This work is open-source and welcomes contributions. Feel free to submit a pull request or open an issue.

Citation

Any publication that uses this source code should cite the State paper.
