Contrastive Learning for Sequence and Structure - co-embeds protein sequences and structures
CLSS: Contrastive learning unites sequence and structure in a global representation of protein space
Paper: https://www.biorxiv.org/content/10.1101/2025.09.05.674454.full.pdf
DOI: https://doi.org/10.1101/2025.09.05.674454
GitHub repository: https://github.com/guyyanai/CLSS
Interactive viewer: https://gabiaxel.github.io/clss-viewer/
Abstract
Amino acid sequence dictates the three-dimensional structure and biological function of proteins. Yet, despite decades of research, our understanding of the interplay between sequence and structure is incomplete. To meet this challenge, we introduce Contrastive Learning Sequence-Structure (CLSS), an AI-based contrastive learning model trained to co-embed sequence and structure information in a self-supervised manner. We trained CLSS on large and diverse sets of protein building blocks called domains. CLSS represents both sequences and structures as vectors in the same high-dimensional space, where distance relates to sequence-structure similarity. Thus, CLSS provides a natural way to represent the protein universe, reflecting evolutionary relationships, as well as structural changes. We find that CLSS refines expert knowledge about the global organization of protein space, and highlights transitional forms that resist hierarchical classification. CLSS reveals linkage between domains of seemingly separate lineages, thereby significantly improving our understanding of evolutionary design.
TL;DR
CLSS is a self-supervised, two-tower contrastive model that co-embeds protein sequences and structures into a shared 32‑D space, enabling unified mapping of protein space across modalities.
Key ideas
- Two-tower architecture: a co-trained sequence tower (ESM2‑like, ~35M params) and a frozen structure tower (ESM3), each feeding a 32‑D L2‑normalized adapter.
- Segment-aware training: contrastive pairs match full-domain structures with random sequence sub-segments (≥10 aa) to encode contextual compatibility (see the sampling sketch below).
- Unified embeddings: sequences, structures, and subsequences align in a single space; distances track ECOD hierarchy and reveal cross-fold relationships.
- Scale & efficiency: ~36M trainable params, compact embeddings (32‑D) supporting efficient inference and training.
- Resources: code + weights, and a public CLSS viewer for exploration.
See the paper for full details, datasets, ablations, and comparisons.
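The segment-aware pairing is simple to illustrate. Below is a minimal Python sketch, assuming only the ≥10-residue minimum stated above; the helper name and sampling scheme are illustrative, not the repository's actual training code.

```python
import random

MIN_SEGMENT_LEN = 10  # minimum sub-segment length stated in Key ideas

def sample_segment(sequence: str, min_len: int = MIN_SEGMENT_LEN) -> str:
    """Draw a random contiguous sub-segment of a domain sequence.

    Illustrative only: during training, the full-domain structure is
    paired with a segment like this to form a contrastive pair.
    """
    n = len(sequence)
    if n <= min_len:
        return sequence  # short domains are used whole
    length = random.randint(min_len, n)    # segment length in [min_len, n]
    start = random.randint(0, n - length)  # random start position
    return sequence[start:start + length]

# Example: pair a full-domain structure with one random segment of its sequence.
domain_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
print(sample_segment(domain_seq))
```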
Architecture
Figure 1: Overview of training, validating, and testing CLSS to create unified maps of protein sequence and structure space. (A) Overview of the two-tower CLSS architecture. On the left is a structure tower based on the frozen, pre-trained ESM3 model (light blue), followed by a trained adapter (yellow) that averages, reduces dimension, and normalizes the embedding. On the right is the trained CLSS sequence tower, built upon a pre-trained ESM2 model, and its adapter network (yellow). The networks are trained using a contrastive loss on batches of randomly chosen structures and sequence segments from the ECOD-AF2 domain database. Labels from a hierarchical classification were not used during training in any way. Once trained, we calculate the embeddings of the structures, sequences, and sequence segments from Datasets 1 and 2 using CLSS (B) and other PLMs (C). (D) Dimensionality reduction by t-SNE was used to create visual maps of protein space (upper images). Pairwise distance distributions (lower images) were calculated from the embeddings directly, rather than from the t-SNE-reduced space.
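For a concrete reading of panel (A), the following PyTorch sketch shows an adapter of the kind the caption describes (average over residue embeddings, linear reduction to 32‑D, L2 normalization) with a standard symmetric InfoNCE contrastive loss. The dimensions and temperature are illustrative assumptions, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Averages per-residue embeddings, reduces dimension, and L2-normalizes,
    as described for the yellow adapter blocks in Figure 1A."""
    def __init__(self, in_dim: int, out_dim: int = 32):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, residue_embeddings: torch.Tensor) -> torch.Tensor:
        # residue_embeddings: (batch, length, in_dim)
        pooled = residue_embeddings.mean(dim=1)        # average over residues
        return F.normalize(self.proj(pooled), dim=-1)  # 32-D unit vectors

def contrastive_loss(seq_emb, struct_emb, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of matched sequence/structure pairs.
    The temperature value is an illustrative assumption."""
    logits = seq_emb @ struct_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```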
Visualization
CLSS embeddings capture the global organization of protein space, revealing evolutionary relationships and structural similarities across diverse protein domains.
Figure 2: CLSS embedding maps of ECOD domains (Dataset 1). For each domain, we calculate embeddings for three modalities – structure, sequence, and a random sequence segment – and then compute a t-SNE projection of the embeddings. Each point represents one of the modalities of a domain, colored according to the label of its ECOD architecture. (A) An overlay of all three modalities. Sequences are marked by circles, structures by '+', and random sequence segments by 'x'. (B) Structure embeddings. (C) Sequence segment embeddings. (D) Sequence embeddings. We find that the maps of all three modalities are very similar to each other, with the sequence (D) and structure (B) embeddings being the closest. This shows that CLSS successfully injected structure information into the sequence modality. The global organization of the CLSS embedding space positions domains with the same ECOD architecture, and even the same structure class, near each other.
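The workflow in these figures can be approximated with standard tools: project the 32‑D embeddings with t-SNE for display, but compute distance statistics in the original space, as the Figure 1 caption notes. A minimal sketch with scikit-learn and SciPy, using random placeholder vectors in place of real CLSS embeddings:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import TSNE

# Placeholder embeddings standing in for real 32-D L2-normalized CLSS vectors.
embeddings = np.random.randn(500, 32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Visual map: 2-D t-SNE projection, used for plotting only.
coords_2d = TSNE(n_components=2, random_state=0).fit_transform(embeddings)

# Distance distributions: computed in the original 32-D space, not the
# t-SNE projection, mirroring the Figure 1D caveat.
pairwise_cosine = pdist(embeddings, metric="cosine")
```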
Quick Start
Installation
```bash
pip install clss-model
```
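A quick embedding call might then look like the sketch below. Note that the import path, class, and method names here are hypothetical placeholders chosen for illustration; consult examples/inference/infer.py for the package's actual interface.

```python
# Hypothetical usage sketch - the names below are placeholders, not the
# package's documented API; see examples/inference/infer.py for real usage.
from clss_model import CLSSModel  # hypothetical import

model = CLSSModel.load_pretrained()               # hypothetical weight loader
seq_vec = model.embed_sequence("MKTAYIAKQRQIS")   # 32-D sequence embedding
struct_vec = model.embed_structure("domain.pdb")  # 32-D structure embedding
similarity = float((seq_vec * struct_vec).sum())  # cosine on unit vectors
```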
Examples
Complete examples are available in the examples/ directory:
- examples/training/ - Full training pipeline
  - train.py - Main training script with PyTorch Lightning
  - dataset.py - ECOD dataset loading and preprocessing
  - args.py - Command-line argument parsing
  - infra.py - Infrastructure setup (distributed training, logging)
- examples/inference/ - Inference and embedding
  - infer.py - Protein sequence and structure embedding
  - sample-pdbs/ - Example PDB files for testing
- examples/interactive-map/ - Interactive visualization
  - app.py - Complete pipeline from data to interactive HTML visualization
  - mapper.py - Plotly-based interactive scatter plot creation
  - dataset.py - Multi-modal data loading (FASTA/PDB)
  - embeddings.py - CLSS model inference and embedding generation
  - dim_reducer.py - t-SNE dimensionality reduction
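As a rough picture of what the interactive-map example produces, the sketch below renders a labeled 2-D projection to an HTML file with Plotly. It is a simplified stand-in for mapper.py, using random placeholder coordinates and labels rather than real CLSS output.

```python
import numpy as np
import pandas as pd
import plotly.express as px

# Simplified stand-in for examples/interactive-map/mapper.py: color a 2-D
# projection by ECOD architecture label and write an interactive HTML map.
rng = np.random.default_rng(0)
coords = rng.normal(size=(300, 2))  # placeholder t-SNE coordinates
labels = rng.choice(["alpha arrays", "beta barrels", "a/b sandwiches"], size=300)

df = pd.DataFrame({"x": coords[:, 0], "y": coords[:, 1], "architecture": labels})
fig = px.scatter(df, x="x", y="y", color="architecture")
fig.write_html("clss_map.html")  # open in a browser to explore
```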
Data
- ECOD‑AF2 domains (training/validation set) - Available in datasets/training/
- F40-large-folds (Dataset 1 from the paper) - Available in datasets/F40-large-folds/; contains all ECOD-PDB-F40 domains in folds with more than 50 domains
Citation
If you use this repository, please cite:
```bibtex
@article{Yanai2025CLSS,
  title={Contrastive learning unites sequence and structure in a global representation of protein space},
  author={Yanai, Guy and Axel, Gabriel and Longo, Liam M. and Ben-Tal, Nir and Kolodny, Rachel},
  journal={bioRxiv},
  year={2025},
  doi={10.1101/2025.09.05.674454},
  url={https://www.biorxiv.org/content/10.1101/2025.09.05.674454.full.pdf}
}
```
Acknowledgments & Contact
- See the paper for funding and acknowledgments.
- Correspondence: llongo@elsi.jp, bental@tauex.tau.ac.il, trachel@cs.haifa.ac.il.