SEAM: Meta-explanations for interpreting sequence-based deep learning models
SEAM: systematic explanation of attribution-based mechanisms for regulatory genomics
This repository contains the Python implementation of SEAM (Systematic Explanation of Attribution-based Mechanisms), an AI interpretation framework that systematically investigates how mutations reshape regulatory mechanisms. For an extended discussion of this approach and its applications, please refer to our manuscript, which we presented at the ICLR 2025 GEM Workshop:
- Seitz, E.E., McCandlish, D.M., Kinney, J.B., and Koo, P.K. Decoding the Mechanistic Impact of Genetic Variation on Regulatory Sequences with Deep Learning. Workshop on Generative and Experimental Perspectives for Biomolecular Design, International Conference on Learning Representations, April 15, 2025. https://openreview.net/forum?id=PtjMeyHcTt
A bioRxiv preprint is also in preparation.
Installation:
With Anaconda sourced, create a new environment via the command line: `conda create --name seam`
Next, activate this environment via `conda activate seam`, and install the package: `pip install seam-nn`
Finally, when you are done using the environment, exit via `conda deactivate`.
Notes
SEAM has been tested on Mac and Linux operating systems. Typical installation time on a normal computer is less than 1 minute.
If you have any issues installing SEAM, please see:
- https://seam-nn.readthedocs.io/en/latest/installation.html
- https://github.com/evanseitz/seam-nn/issues
For issues installing SQUID, the package used for sequence generation and inference, please see:
- https://squid-nn.readthedocs.io/en/latest/installation.html
- https://github.com/evanseitz/squid-nn/issues
Older DNNs that require inference via TensorFlow 1.x or related packages may conflict with SEAM defaults. Users will need to run SEAM piecewise within separate environments:
- TensorFlow 1.x environment for generating the in silico sequence-function-mechanism dataset
- TensorFlow 2.x environment for applying SEAM to explain the in silico sequence-function-mechanism dataset
Usage and Requirements:
SEAM provides a unified interface for mechanistic interpretation of sequence-based deep learning models.
The framework takes as input a sequence-based oracle (e.g., a genomic DNN) and requires four key components to perform analysis:
- Sequence Library (`numpy.ndarray`): One-hot encoded sequences of shape (N, L, A), where:
  - N: number of sequences
  - L: sequence length
  - A: number of features (e.g., 4 for DNA nucleotides)
- Predictions/Measurements (`numpy.ndarray`): Experimental or model-derived values of shape (N, 1), corresponding to each sequence's functional output.
- Attribution Maps (`numpy.ndarray`): Mechanistic importance scores of shape (N, L, A), quantifying the contribution of each position-feature pair to the sequence's function. These can be generated using various attribution methods.
- Clustering/Embedding (either):
  - Hierarchical clustering linkage matrix (e.g., from `scipy.cluster.hierarchy.linkage`)
  - Dimensionality reduction embedding of shape (N, Z), where Z is the number of dimensions in the embedded space
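As a minimal sketch of what these four inputs look like, the snippet below assembles them with random toy data; the shapes follow the descriptions above, but all names and sizes are illustrative, not SEAM's API:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
N, L, A = 100, 50, 4                     # toy library: 100 DNA sequences of length 50

# Sequence library: one-hot encoded, shape (N, L, A)
x = np.eye(A)[rng.integers(0, A, size=(N, L))]

# Predictions/measurements: one scalar per sequence, shape (N, 1)
y = rng.normal(size=(N, 1))

# Attribution maps: same shape as the sequence library, (N, L, A)
attr = rng.normal(size=(N, L, A))

# Clustering/embedding input: either a hierarchical linkage matrix ...
link = linkage(attr.reshape(N, -1), method='ward')   # shape (N-1, 4)

# ... or a low-dimensional embedding of shape (N, Z)
Z = 2
emb = attr.reshape(N, -1) @ rng.normal(size=(L * A, Z))
```

In practice the predictions would come from the oracle model or experiment, and the attribution maps from an attribution method rather than random draws.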
These required files can be generated either externally or using SEAM's specialized modules (described below). Once provided, SEAM applies a meta-explanation approach to interpret the sequence-function-mechanism dataset, deciphering the determinants of mechanistic variation in regulatory sequences.
For detailed examples of how to generate these requirements using SEAM's modules and apply the analysis pipeline to reproduce key findings from our main manuscript, see the Examples section at the end of this document.
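As one way to produce the attribution-map requirement externally, in silico mutagenesis (ISM) scores every possible single substitution by the change it induces in the model's prediction. The sketch below uses a stand-in linear "oracle" purely for illustration; in real use the oracle would be a trained DNN:

```python
import numpy as np

rng = np.random.default_rng(0)
L_seq, A = 20, 4
W = rng.normal(size=(L_seq, A))              # toy linear "oracle" weights

def predict(x):
    """Stand-in oracle: score a one-hot sequence (L, A) -> scalar."""
    return float((x * W).sum())

def ism(x):
    """In silico mutagenesis: prediction change for every single substitution."""
    y0 = predict(x)
    scores = np.zeros_like(x)
    for i in range(x.shape[0]):
        for a in range(x.shape[1]):
            x_mut = x.copy()
            x_mut[i] = np.eye(x.shape[1])[a]  # substitute base a at position i
            scores[i, a] = predict(x_mut) - y0
    return scores                             # (L, A); zero at each reference base

x = np.eye(A)[rng.integers(0, A, size=L_seq)]  # random one-hot sequence
attr = ism(x)
```

Stacking such maps over a sequence library yields the required (N, L, A) attribution array.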
SEAM Modules:
SEAM's analysis pipeline is implemented through several specialized modules that work together:
- Mutagenizer (from SQUID): Generates in silico sequence libraries through various mutagenesis strategies, including local, global, optimized, and complete libraries (supporting all combinatorial mutations up to a specified order). Features GPU acceleration and batch processing for efficient sequence generation.
- Compiler: Standardizes sequence analysis by converting one-hot encoded sequences to string format and computing associated metrics. Compiles sequences and functional properties into a DataFrame, with support for metrics such as Hamming distances and global importance analysis scores. Implements GPU-accelerated sequence conversion and vectorized operations.
- Attributer: Computes attribution maps that quantify the base-wise contribution to regulatory activity. SEAM provides GPU-accelerated implementations of Saliency Maps, IntGrad, SmoothGrad, and ISM. DeepSHAP is not yet optimized for efficient batch processing across the sequence library; examples for incorporating DeepSHAP using external scripts are provided in the `examples` folder.
- Clusterer: Computes mechanistic clusters and embeddings from attribution maps to identify distinct regulatory mechanisms. Supports hierarchical clustering (GPU-optimized), K-means, and DBSCAN algorithms, with optional dimensionality reduction (UMAP, t-SNE, PCA) for complementary interpretability.
- MetaExplainer: The core SEAM module that integrates results to identify and interpret mechanistic patterns. Generates cluster-averaged attribution maps (shape: (L, A) for each cluster) and the Mechanism Summary Matrix (MSM), a DataFrame containing position-wise statistics (entropy, consensus matches, reference mismatches) for each cluster. Also implements background separation and provides visualization tools for sequence logos, attribution logos, and cluster statistics, with support for both PWM-based and enrichment-based analysis. Features GPU acceleration with CPU fallbacks.
- Identifier: Analyzes cluster-averaged attribution maps in conjunction with the MSM to identify precise locations of motifs and their epistatic interactions.
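The Compiler's Hamming-distance metric, for example, reduces to a vectorized comparison of one-hot arrays. A minimal sketch (the function name is illustrative, not SEAM's API):

```python
import numpy as np

def hamming_to_reference(x, x_ref):
    """Count positions where each sequence differs from a reference.

    x:     one-hot library, shape (N, L, A)
    x_ref: one-hot reference, shape (L, A)
    """
    # A position matches iff its one-hot rows are identical across the alphabet
    same = (x == x_ref[None]).all(axis=-1)   # (N, L) boolean
    return (~same).sum(axis=1)               # (N,) mutation counts

# Toy check: mutate one position of a reference sequence
ref = np.eye(4)[np.zeros(10, dtype=int)]     # ten 'A' bases
mut = ref.copy()
mut[3] = np.eye(4)[2]                        # one substitution at position 3
lib = np.stack([ref, mut])
print(hamming_to_reference(lib, ref))        # → [0 1]
```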
Module Relationships:
SEAM's modules form an integrated pipeline where outputs from earlier modules feed into subsequent analysis. The Mutagenizer generates sequences that are processed by the Compiler and Attributer. These attribution maps are then clustered by the Clusterer, with results from Mutagenizer, Compiler and Attributer integrated by the MetaExplainer to characterize each SEAM-derived mechanism. The Identifier module then analyzes these MetaExplainer outputs to pinpoint specific regulatory elements and their interactions.
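The core reduction in this pipeline, clustering attribution maps and averaging within each cluster, can be sketched with standard SciPy/NumPy on toy data; this is an illustration of the idea, not SEAM's actual implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
N, L, A = 200, 30, 4
attr = rng.normal(size=(N, L, A))        # attribution maps, shape (N, L, A)

# Hierarchical clustering on flattened maps, cut into (at most) k clusters
link = linkage(attr.reshape(N, -1), method='ward')
k = 5
labels = fcluster(link, t=k, criterion='maxclust')   # (N,) labels in 1..k

# Cluster-averaged attribution maps: one (L, A) map per putative mechanism
avg_maps = {c: attr[labels == c].mean(axis=0) for c in np.unique(labels)}
```

Each averaged (L, A) map plays the role of the per-cluster mechanism summary that downstream analysis interprets.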
Examples
Google Colab examples for applying SEAM to previously published deep learning models are available at the links below.
Note: Due to memory requirements for calculating distance matrices, Colab Pro may be required for examples using hierarchical clustering with their current settings.
- Local library to annotate all TFBSs and biophysical states
- DeepSTARR: Enhancer 20647 (Fig.2a)
- Local library with 30k sequences and 10% mutation rate | Integrated gradients; hierarchical clustering
- Expected run time: ~3.2 minutes on Colab A100 GPU
- Local library to reveal low-affinity motifs using background separation
- DeepSTARR: Enhancer 5353 (Fig.TBD)
- Local library with 60k sequences and 10% mutation rate | Integrated gradients; hierarchical clustering
- Expected run time: ~8.5 minutes on Colab A100 GPU
- Local library to explore mechanism space of an enhancer TFBS
- DeepSTARR: Enhancer 13748 (SFig.TBD)
- Local library with 100k sequences and 10% mutation rate | Saliency maps; UMAP with K-Means clustering
- Expected run time: ~3.9 minutes on Colab A100 GPU
- Combinatorial-complete library with empirical mutagenesis maps
- PBM: Zfp187 (Fig.TBD)
- Combinatorial-complete library with 65,536 sequences | ISM; Hierarchical clustering
- Expected run time: ~12 minutes on Colab A100 GPU
- Combinatorial-complete library with interactive mechanism space viewer
- PBM: Hnf4a (Fig.TBD)
- Combinatorial-complete library with 65,536 sequences | ISM; UMAP with K-Means clustering
- Expected run time: ~4.9 minutes on Colab A100 GPU
- Global library to compare mechanistic heterogeneity of an enhancer TFBS
- DeepSTARR: CREB/ATF (Fig.TBD)
- Global library with 100k sequences | Saliency maps; UMAP with K-Means clustering
- Expected run time: ~3.2 minutes on Colab A100 GPU
- Global library to compare mechanisms across different developmental programs
- DeepSTARR: DRE (Fig.TBD)
- Global library with 100k sequences | Saliency maps; UMAP with K-Means clustering
- Expected run time: ~2.7 minutes on Colab A100 GPU
- Global library to compare mechanisms associated with genomic and synthetic TFBSs
- DeepSTARR: AP-1 (Fig.TBD)
- Global library with 100k sequences | Integrated gradients; UMAP with K-Means clustering
- Expected run time: ~3.9 minutes on Colab A100 GPU
Python script examples are provided in the `examples` folder for locally running SEAM and exporting outputs to file. Some of these examples include models that are not compatible with the latest libraries supported by Google Colab, including:
- Local library to analyze foreground and background signals at human promoters and enhancers
- ChromBPNet: PPIF promoter/enhancer (Fig.3)
- Local library with 100k sequences and 10% mutation rate | {Saliency, IntGrad, SmoothGrad, ISM}; Hierarchical clustering
Additional dependencies may be required for these Python examples; they are outlined at the top of each script.
SEAM Interactive Interpretability Tool:
Interactive interpretability tools are currently under development and will be available in a future release. These tools will provide a graphic user interface (GUI) for dynamically interpreting SEAM results, allowing users to explore and analyze pre-computed inputs from the example scripts above.
Citation:
If this code is useful in your work, please cite our paper.
bibtex TODO
License:
Copyright (C) 2023–2025 Evan Seitz, David McCandlish, Justin Kinney, Peter Koo
The software, code samples, and their documentation made available on this website could include technical or other mistakes, inaccuracies, or typographical errors. We may make changes to the software or documentation at any time without prior notice, and we assume no responsibility for errors or omissions therein. For further details, please see the LICENSE file.