cell-eval

Evaluation metrics for single-cell perturbation predictions

Description

This package provides a comprehensive suite of metrics for evaluating the performance of models that predict cellular responses to perturbations at the single-cell level. It can be used either as a command-line tool or as a Python module.

Installation

Install with uv

# install from pypi
uv pip install -U cell-eval

# install from github directly
uv pip install -U git+https://github.com/arcinstitute/cell-eval

# install cli with uv tool
uv tool install -U git+https://github.com/arcinstitute/cell-eval

# Check installation
cell-eval --help

Usage

To get started you'll need two AnnData files:

  1. A predicted AnnData (adata_pred).
  2. A real AnnData to compare against (adata_real).

Prep (VCC)

To prepare an AnnData for VCC evaluation, use the cell-eval prep command. This will strip the AnnData down to the bare essentials, compress it, adjust naming conventions, and ensure compatibility with the evaluation framework.

This step is optional for downstream usage, but recommended for optimal performance and compatibility.

Run this on your predicted anndata:

cell-eval prep \
    -i <your/path/to>.h5ad \
    -g <expected_genelist>

Run

To run an evaluation between two AnnData files, use the cell-eval run command.

This will run differential expression for each AnnData and then run a suite of evaluation metrics to compare the two (select your suite of metrics with the --profile flag).

To save time, you can supply precomputed differential expression results; see cell-eval run --help for details.

cell-eval run \
    -ap <your/path/to/pred>.h5ad \
    -ar <your/path/to/real>.h5ad \
    --num-threads 64 \
    --profile full

To run the evaluation from Python, use the MetricsEvaluator class.

from cell_eval import MetricsEvaluator
from cell_eval.data import build_random_anndata, downsample_cells

adata_real = build_random_anndata()
adata_pred = downsample_cells(adata_real, fraction=0.5)
evaluator = MetricsEvaluator(
    adata_pred=adata_pred,
    adata_real=adata_real,
    control_pert="control",
    pert_col="perturbation",
    num_threads=64,
)
(results, agg_results) = evaluator.compute()

This will give you metric evaluations for each perturbation individually (results) and aggregated across all perturbations (agg_results).
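To illustrate the relationship between the two outputs, here is a toy aggregation in plain Python. The perturbation names, metric name, and values are made up for illustration, and the real objects returned by compute() are richer than plain dicts:

```python
# Hypothetical per-perturbation metric values (results analogue)
results = {
    "KLF1": {"mse": 0.10},
    "GATA1": {"mse": 0.30},
}

# Aggregate by averaging each metric across perturbations (agg_results analogue)
metric_names = {m for row in results.values() for m in row}
agg_results = {
    m: sum(row[m] for row in results.values()) / len(results)
    for m in metric_names
}
print(agg_results)  # the per-perturbation MSEs average to 0.2
```

The per-perturbation table is what you inspect to find which perturbations a model handles poorly; the aggregate is what you compare across models.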

Score

To normalize your scores against a baseline, run the cell-eval score command.

This accepts two agg_results.csv files (or agg_results objects in Python) as input.

cell-eval score \
    --user-input <your/path/to/user>/agg_results.csv \
    --base-input <your/path/to/base>/agg_results.csv

Or from python:

from cell_eval import score_agg_metrics

user_input = "./cell-eval-user/agg_results.csv"
base_input = "./cell-eval-base/agg_results.csv"
output_path = "./score.csv"

score_agg_metrics(
    results_user=user_input,
    results_base=base_input,
    output=output_path,
)
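Conceptually, scoring against a baseline rescales each aggregated metric so the baseline sits at zero. The sketch below shows one common way to do this; it is NOT cell-eval's actual formula, and the metric names are invented for illustration. It assumes each metric lies in [0, 1] with higher being better:

```python
def normalize_against_baseline(user: dict, base: dict) -> dict:
    """Generic baseline normalization sketch (not cell-eval's formula).

    0.0 means no better than baseline; 1.0 means perfect.
    """
    scores = {}
    for name, baseline in base.items():
        headroom = 1.0 - baseline  # room for improvement over the baseline
        scores[name] = (user[name] - baseline) / headroom if headroom > 0 else 0.0
    return scores

# Hypothetical aggregated metrics for a user model and a baseline model
user = {"de_overlap": 0.8, "correlation": 0.9}
base = {"de_overlap": 0.5, "correlation": 0.6}
print(normalize_against_baseline(user, base))  # {'de_overlap': 0.6, 'correlation': 0.75}
```

Normalizing this way makes scores comparable across metrics with very different baseline difficulties.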

Library Design

The metrics are built using the Python registry pattern, which makes it easy to add new metrics behind a well-typed interface.

Take a look at existing metrics in cell_eval.metrics to get started.
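For readers unfamiliar with it, the registry pattern looks roughly like the following. This is a generic, self-contained sketch of the pattern, not cell-eval's actual internals; the registry name, decorator, and metric functions here are hypothetical:

```python
from typing import Callable, Dict
import numpy as np

# Hypothetical registry mapping metric names to metric functions
METRIC_REGISTRY: Dict[str, Callable[[np.ndarray, np.ndarray], float]] = {}

def register_metric(name: str):
    """Decorator that adds a metric function to the registry under `name`."""
    def decorator(fn):
        METRIC_REGISTRY[name] = fn
        return fn
    return decorator

@register_metric("mse")
def mean_squared_error(pred: np.ndarray, real: np.ndarray) -> float:
    return float(np.mean((pred - real) ** 2))

@register_metric("pearson")
def pearson(pred: np.ndarray, real: np.ndarray) -> float:
    return float(np.corrcoef(pred, real)[0, 1])

# Every registered metric can now be run uniformly over the same inputs
pred = np.array([1.0, 2.0, 3.0])
real = np.array([1.0, 2.5, 2.5])
scores = {name: fn(pred, real) for name, fn in METRIC_REGISTRY.items()}
```

Registering metrics this way means the evaluator can discover and run all metrics by name, and adding a metric requires only writing one decorated function.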

Development

This work is open-source and welcomes contributions. Feel free to submit a pull request or open an issue.

Citation

Any publication that uses this source code should cite the State paper.
