# ProtHash
A protein language model that outputs amino acid sequence embeddings for use in clustering, classification, locality-sensitive hashing, and more. Distilled from the ESMC family of models, which have deep comprehension of protein structure, ProtHash produces contextual embeddings that align in vector space according to the sequences' atomic structure. Trained on the SwissProt dataset to mimic the activations of its ESMC teacher model, ProtHash produces embeddings with near-perfect similarity to ESMC's at a greatly reduced computational cost.
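For intuition, distillation of this kind typically minimizes a pointwise loss between the student's projected activations and the frozen teacher's activations for the same input. The sketch below shows the general shape of such an objective using mean squared error (one of the losses compared in the Kim et al. reference below); it is a schematic, not the project's actual training code, and the tensor names and shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_embed: torch.Tensor, teacher_embed: torch.Tensor) -> torch.Tensor:
    """Schematic distillation objective: pull the student's projected
    embedding toward the frozen teacher's embedding (MSE variant)."""
    return F.mse_loss(student_embed, teacher_embed)

# Placeholder tensors standing in for a batch of sequence embeddings.
student_embed = torch.randn(8, 1152, requires_grad=True)
teacher_embed = torch.randn(8, 1152)

loss = distillation_loss(student_embed, teacher_embed)
loss.backward()
```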
## Key Features

- **Blazing fast and efficient**: ProtHash uses as few as 1.5% of its ESMC teacher's total parameters to achieve near-perfect cosine similarity between the two embedding spaces.
- **Structurally relevant**: Structurally similar proteins land close together in the embedding space, enabling downstream tasks such as clustering, classification, and locality-sensitive hashing based on atomic structure.
- **Compatible with ESMC**: ProtHash can output embeddings in either its native or its ESMC teacher's dimensionality, allowing it to serve both as a faster drop-in replacement for ESMC embeddings and as a more efficient compressed representation.
- **Quantization-ready**: With quantization-aware post-training, ProtHash lets you quantize the weights of the model without losing similarity to the teacher's embedding space (see the sketch after this list).
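As a rough illustration of the quantization point above, the sketch below applies PyTorch's built-in dynamic int8 quantization to a ProtHash model's linear layers. This is a generic post-training path shown for illustration, not necessarily the project's own quantization scheme; `ProtHash.from_pretrained` is used the same way as in the example further down.

```python
import torch

from prothash.model import ProtHash

# Load a pretrained model (see the Pretrained Example below).
model = ProtHash.from_pretrained("andrewdalpino/ProtHash-512-Tiny")

# Generic PyTorch dynamic quantization: weights of all linear layers
# are stored as int8 and dequantized on the fly during inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```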
## Pretrained Models
| Name | Context Length | Embedding Dimensionality | Attention Heads (Q/KV) | Encoder Layers | Total Params | Teacher Model | Teacher Dimensionality |
|---|---|---|---|---|---|---|---|
| andrewdalpino/ProtHash-384-Tiny | 2048 | 384 | 16/4 | 4 | 5M | esmc_300m | 960 |
| andrewdalpino/ProtHash-384 | 2048 | 384 | 16/4 | 10 | 11M | esmc_300m | 960 |
| andrewdalpino/ProtHash-512-Tiny | 2048 | 512 | 16/4 | 4 | 8.5M | esmc_600m | 1152 |
| andrewdalpino/ProtHash-512 | 2048 | 512 | 16/4 | 10 | 19M | esmc_600m | 1152 |
## Pretrained Example
First, you'll need the `prothash` and `esm` packages installed into your environment. We recommend using a virtual environment such as Python's venv module to prevent version conflicts with any system packages.

```
pip install prothash esm
```
Then, load the weights from HuggingFace Hub, tokenize a protein sequence, and pass it to the model. ProtHash adopts the ESM tokenizer as its amino acid tokenization scheme, which consists of a vocabulary of 33 amino acid and special tokens. The output will be an embedding vector that can be used in downstream tasks such as comparing to other protein sequence embeddings, clustering, and near-duplicate detection.
```python
import torch

from esm.tokenization import EsmSequenceTokenizer

from prothash.model import ProtHash

tokenizer = EsmSequenceTokenizer()

model_name = "andrewdalpino/ProtHash-512-Tiny"

model = ProtHash.from_pretrained(model_name)

sequence = input("Enter a sequence: ")

# Truncate to the model's 2048-token context length.
out = tokenizer(sequence, max_length=2048, truncation=True)

tokens = out["input_ids"]

# Input is a [1, T] tensor of token indices.
x = torch.tensor(tokens, dtype=torch.int64).unsqueeze(0)

# Output the sequence embedding in native dimensionality.
y_embed_native = model.embed_native(x).squeeze(0)

print(y_embed_native.shape)

# Output a drop-in replacement for the teacher's embeddings.
y_embed_teacher = model.embed_teacher(x).squeeze(0)

print(y_embed_teacher.shape)
```
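Since the embeddings are plain vectors, the downstream tasks mentioned above reduce to standard vector operations. The following sketch reuses the `model` and `tokenizer` from the example and compares two sequences by the cosine similarity of their native embeddings, then derives a toy random-hyperplane locality-sensitive hash; the `embed` helper, the example sequences, and the choice of 64 hyperplanes are ours for illustration, not part of the package.

```python
import torch
import torch.nn.functional as F

def embed(sequence: str) -> torch.Tensor:
    """Tokenize a sequence and return its native ProtHash embedding."""
    tokens = tokenizer(sequence, max_length=2048, truncation=True)["input_ids"]
    x = torch.tensor(tokens, dtype=torch.int64).unsqueeze(0)
    return model.embed_native(x).squeeze(0)

# Two example sequences that differ by a single residue.
a = embed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
b = embed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA")

# Cosine similarity in [-1, 1]; structurally similar proteins score higher.
print(F.cosine_similarity(a, b, dim=0).item())

# Toy locality-sensitive hash: sign bits of random projections.
# Nearby embeddings share most bits, enabling fast bucketing.
torch.manual_seed(0)
planes = torch.randn(64, a.shape[0])
bits_a = planes @ a > 0
bits_b = planes @ b > 0
print((bits_a == bits_b).float().mean().item())  # fraction of matching bits
```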
## References

- The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2025. Nucleic Acids Research, 53, D609–D617, 2025.
- T. Hayes, et al. Simulating 500 Million Years of Evolution with a Language Model, 2024.
- B. Zhang, et al. Root Mean Square Layer Normalization. 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
- J. Ainslie, et al. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, 2023.
- T. Kim, et al. Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation, 2021.