Heimdall: A Comprehensive Paradigm for Evaluating Single-Cell Representations within Foundational Models

Heimdall

Installation

# Clone repository
git clone https://github.com/gkrieg/Heimdall && cd Heimdall

# Create conda env
conda create --name heimdall python=3.10 && conda activate heimdall

# Install dependencies
pip install torch==2.0.1+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# Install Heimdall (in editable `-e` mode)
pip install -e .

Quickstart

train.py provides a clear overview of the required inputs: how the data, model, and optimizer are prepared, and how the trainer is run.

python train.py +experiments=cta_pancreas

Make sure to edit the global config file config/global_vars.yaml based on your setup.
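For reference, a hedged sketch of what config/global_vars.yaml might contain. Only cache_preprocessed_dataset_dir appears elsewhere in these docs; the other keys are placeholders for machine-specific paths:

```yaml
# config/global_vars.yaml (illustrative; key names other than
# cache_preprocessed_dataset_dir are assumptions)
data_dir: /path/to/your/data
cache_preprocessed_dataset_dir: /path/to/cache  # set to null to disable caching
```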

Sweeps

scripts/create_sweep.py takes the arguments --experiment-name (the Hydra experiment file name), --project-name (the W&B project name), and --fg and --fc (the names of the Hydra configs). This short script loads sweeps/base.yaml, updates it accordingly, and creates and returns a sweep. It can be used in tandem with deploy_sweep.sh to submit multiple sweeps on SLURM systems.

python scripts/create_sweep.py --experiment-name cta_pancreas --project-name Pancreas-Celltype-Classification
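A hedged sketch of the logic described above, assuming a typical W&B sweep layout; the base template contents, field names, and override spelling (fg=/fc=) are illustrative, not taken from the actual script:

```python
# Hypothetical sketch of scripts/create_sweep.py: load the base sweep
# template, stamp in the project, and route Hydra overrides through
# the sweep command. All internals here are assumptions.
import copy

BASE_SWEEP = {  # stands in for the contents of sweeps/base.yaml
    "program": "train.py",
    "method": "bayes",
    "parameters": {},
}

def create_sweep(experiment_name, project_name, fg=None, fc=None):
    """Build a W&B sweep config dict from the base template."""
    sweep = copy.deepcopy(BASE_SWEEP)  # leave the template untouched
    sweep["project"] = project_name
    # Hydra overrides are appended to the command the sweep agent runs.
    command = ["python", "train.py", f"+experiments={experiment_name}"]
    if fg:
        command.append(f"fg={fg}")
    if fc:
        command.append(f"fc={fc}")
    sweep["command"] = command
    return sweep

sweep = create_sweep("cta_pancreas", "Pancreas-Celltype-Classification")
```

The deep copy matters when the script is used to build several sweeps in one run (as deploy_sweep.sh suggests): each sweep gets its own dict rather than mutating the shared template.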

Dev Notes

Dev installation

pip install -r requirements.txt

# Set up the git hooks so the checks below run on every commit
pre-commit install

Once the pre-commit command line tool is installed, every time you commit changes it will perform several code-style checks and automatically apply fixes where it can. When auto-fixes are applied, you need to re-commit those changes. Note that this process can take more than one round.

After you are done committing changes and are ready to push to the remote branch, run nox to perform a final quality check. Note that nox only lints; it does not fix issues for you, so you need to address them manually based on the instructions it provides.

Cheatsheet

# Run cell type classification dev experiment with wandb disabled
WANDB_MODE=disabled python train.py +experiments=cta_pancreas

# Run cell type classification dev experiment with wandb offline mode
WANDB_MODE=offline python train.py +experiments=cta_pancreas

# Run cell type classification dev experiment with wandb disabled and the epoch count overridden
WANDB_MODE=disabled python train.py +experiments=cta_pancreas tasks.args.epochs=2

# Run cell type classification dev experiment with a user profile (dev has wandb disabled by default)
python train.py +experiments=cta_pancreas user=lane-remy-dev

Nox

Run code linting and unittests:

nox

Run dev experiments test on Lane compute node with CUDA (lane-shared-dev user profile):

nox -e test_experiments

Run fast dev experiments (only selected small datasets):

nox -e test_experiments -- quick_run

Run full dev experiments (including the datasets excluded from the quick run):

nox -e test_experiments -- full_run

Run dev experiments with a different user profile:

nox -e test_experiments -- user=box-remy-dev
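The examples above pass positional arguments through nox's `--` separator. How the session interprets them is not shown in these docs; the sketch below is one plausible way to split them into a run mode and Hydra-style overrides (the parsing logic is an assumption, only the flag names come from the examples):

```python
# Hypothetical interpretation of `nox -e test_experiments -- <args>` posargs.
# Flag names (quick_run, full_run, user=...) are from the docs; the logic is assumed.

def parse_posargs(posargs):
    """Split nox positional args into a run mode and Hydra overrides."""
    mode = "quick_run"            # default: fast dev experiments
    overrides = []
    for arg in posargs:
        if arg in ("quick_run", "full_run"):
            mode = arg
        elif "=" in arg:          # e.g. user=box-remy-dev
            overrides.append(arg)
    return mode, overrides

print(parse_posargs(["full_run", "user=box-remy-dev"]))
# → ('full_run', ['user=box-remy-dev'])
```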

Local tests

We use pytest to write local tests. New test suites can be added under tests/test_{suite_name}.py.
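A minimal sketch of what such a suite can look like. The function under test and file name are hypothetical; the point is that pytest collects any test_* function and plain asserts suffice:

```python
# tests/test_metrics.py — illustrative suite (names hypothetical).
# pytest collects every function named test_*; no base class needed.

def accuracy(preds, labels):
    """Toy function under test: fraction of matching predictions."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def test_accuracy_perfect():
    assert accuracy([0, 1, 1], [0, 1, 1]) == 1.0

def test_accuracy_half():
    assert accuracy([0, 1], [0, 0]) == 0.5
```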

Run a particular test suite with:

python -m pytest tests/test_{suite_name}.py

Run all tests but the integration test:

python -m pytest -m 'not integration'

Note: to run the integration test, you'll need to specify the Hydra user using a .env file. The contents of the file should be like so:

HYDRA_USER=test
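How the .env file gets into the environment is not specified here (the project may use python-dotenv); a hand-rolled loader with equivalent behavior looks roughly like this:

```python
# Illustrative .env loader; the project's actual mechanism is assumed,
# not documented above.
import os

def load_env_file(path=".env"):
    """Read KEY=VALUE lines into os.environ, skipping blanks and comments."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: a value already exported in the shell wins
            os.environ.setdefault(key.strip(), value.strip())
```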

Turning off caching

To turn off dataset caching for dev purposes, set cache_preprocessed_dataset_dir: null in config/global_vars.yaml. Alternatively, pass cache_preprocessed_dataset_dir=null through the command line, e.g.,

python train.py +experiments=cta_pancreas cache_preprocessed_dataset_dir=null
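The behavior implied above — a null cache directory means "always re-preprocess" — can be sketched as follows. Function names and the pickle format are assumptions; only the cache_preprocessed_dataset_dir semantics come from the docs:

```python
# Hedged sketch of the caching pattern: cache_preprocessed_dataset_dir=null
# disables both reading and writing the preprocessed dataset.
import pickle
from pathlib import Path

def load_with_cache(raw_path, cache_dir, preprocess):
    if cache_dir is None:                       # caching turned off
        return preprocess(raw_path)             # always recompute
    cache_file = Path(cache_dir) / (Path(raw_path).name + ".pkl")
    if cache_file.exists():                     # cache hit: skip preprocessing
        return pickle.loads(cache_file.read_bytes())
    data = preprocess(raw_path)                 # cache miss: compute and store
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    cache_file.write_bytes(pickle.dumps(data))
    return data
```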
