Protein family language models
ProFam is an open-source toolkit for training, scoring, and generating protein sequences with protein family language models (pfLMs). It packages the 251M-parameter ProFam-1 model together with open training and inference workflows, a downloadable pretrained checkpoint, and an open dataset release for reproducible experimentation.
Installation
From PyPI
Install ProFam as a standard Python package:
uv pip install profam
or
pip install profam
From Source
If you want the full repository workflows, example data, and inference scripts:
git clone https://github.com/alex-hh/profam.git
cd profam
uv sync
profam download
Optional installs:
- Development tooling:
  uv sync --group dev
- FlashAttention 2:
  uv sync --extra flash-attn
If you run into CUDA or flash-attn issues, see Installation Details.
Quickstart
Verify the installed package
python -c "from profam import ProFam; print('ProFam ready')"
Download the pretrained model weights
The ProFam-1 model weights are hosted on Hugging Face. They are auto-downloaded on first use, or can be fetched ahead of time with:
profam download
Python API
The recommended way to use ProFam programmatically:
from profam import ProFam
model = ProFam() # loads checkpoint once (auto-downloads if needed)
# Generate sequences conditioned on family context
result = model.generate(
    prompt=["ACDEFGHIKLMNPQRSTVWY", "ACDEFGHIKLMNPQRSTVWF"],
    num_samples=10,
    top_p=0.95,
)
print(result.sequences) # list of generated amino acid strings
print(result.scores) # mean log-likelihood per sequence
# Score candidate sequences
result = model.score(
    sequences=["ACDEFGHIKLMNPQRSTVWY", "ACDEFGHIKLMNPQRSTVWF"],
    prompt=["ACDEFGHIKLMNPQRSTVWY"],  # conditioning context
)
print(result.scores) # numpy array of mean log-likelihoods
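# Hypothetical post-processing (not part of the ProFam API): since
# result.scores is a numpy array, candidates can be ranked directly.
import numpy as np
order = np.argsort(result.scores)[::-1]  # candidate indices, best (highest) first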
# Iterative design loop (initial_sequences, n_cycles, and the selection step
# are placeholders for your own data and evaluation logic)
prompt = initial_sequences
for cycle in range(n_cycles):
    result = model.generate(prompt=prompt, num_samples=20, top_p=0.95)
    # ... evaluate with external tools, keep the best as selected_sequences ...
    prompt = initial_sequences + selected_sequences
CLI
profam generate -- --file_path family.fasta --num_samples 10
profam score -- --conditioning_fasta family.a3m --candidates_file variants.csv
profam download
Main Workflows
| Workflow | Purpose | Command |
|---|---|---|
| Download checkpoint | Fetch the pretrained ProFam-1 checkpoint | profam download |
| Generate sequences | Sample new sequences from family prompts | profam generate -- --file_path ... |
| Score sequences | Score candidate sequences with family context | profam score -- --conditioning_fasta ... |
Input Sequence Formats
ProFam supports:
- Unaligned FASTA for standard protein sequence inputs
- Aligned / MSA-style files such as A2M/A3M content with gaps and insertions
For profam score, we recommend providing an aligned MSA file, because sequence weighting is used to encourage diversity when subsampling prompt sequences. Even when aligned inputs are provided, the standard ProFam model converts them into unaligned, gap-free sequences before the forward pass.
During preprocessing:
- gaps (-) and alignment-style dots (.) are removed
- lowercase insertions are converted to uppercase
- U -> C and O -> K
- remaining out-of-vocabulary characters map to [UNK] only when allow_unk=true
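As a rough illustration, the rules above amount to something like the following (a minimal stand-alone sketch, not ProFam's actual implementation; the vocabulary set and the error behaviour when allow_unk is false are assumptions):

VOCAB = set("ACDEFGHIKLMNPQRSTVWY")

def preprocess(seq: str, allow_unk: bool = False) -> str:
    seq = seq.replace("-", "").replace(".", "")    # strip gaps and alignment dots
    seq = seq.upper()                              # promote lowercase insertions
    seq = seq.replace("U", "C").replace("O", "K")  # map rare residues
    if not allow_unk and any(c not in VOCAB for c in seq):
        raise ValueError("out-of-vocabulary residue (set allow_unk=True to map to [UNK])")
    return "".join(c if c in VOCAB else "[UNK]" for c in seq)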
Training
Training is handled via Hydra configs and is intended for development from the source repository (not via pip-installed commands).
Run a lightweight example
configs/experiment/train_profam_example.yaml is configured to run on the bundled example data:
uv run python -m profam.train experiment=train_profam_example logger=null_logger
Train with the ProFam-Atlas dataset
Training data for ProFam is distributed as the ProFam-Atlas dataset release. The default configuration in configs/train.yaml is compatible with the latest ProFam-Atlas release:
uv run python -m profam.train
Citation
If you use ProFam in your work, please cite the preprint:
@article{wells2025profam,
  title   = {ProFam: Open-Source Protein Family Language Modelling for Fitness Prediction and Design},
  author  = {Wells, Jude and Hawkins-Hooker, Alex and Livne, Micha and Lin, Weining and Miller, David and Dallago, Christian and Bordin, Nicola and Paige, Brooks and Rost, Burkhard and Orengo, Christine and Heinzinger, Michael},
  journal = {bioRxiv},
  year    = {2025},
  doi     = {10.64898/2025.12.19.695431},
  url     = {https://www.biorxiv.org/content/10.64898/2025.12.19.695431v1}
}
Installation Details
CPU-only installation
uv sync
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
FlashAttention 2
We recommend installing FlashAttention 2 for faster scoring and generation. For training, it is strongly recommended because ProFam uses sequence packing with batch_size=1 and no padding.
If you need to train without Flash Attention, update the configuration to set data.pack_to_max_tokens=null.
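For example, using the same Hydra command-line override style as the other training commands (assuming data.pack_to_max_tokens is exposed as an override):

uv run python -m profam.train data.pack_to_max_tokens=null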
uv sync --extra flash-attn
python -c "import flash_attn; print(flash_attn.__version__)"
Troubleshooting: conda fallback
If a matching flash-attn wheel is unavailable and a source build is required, this conda-based fallback is often the easiest route:
conda create -n pfenv python=3.11 -y
conda activate pfenv
conda install -c conda-forge ninja packaging -y
conda install -c nvidia cuda-toolkit=12.4 -y
pip install profam
# install a CUDA-enabled PyTorch build (adjust CUDA version/index-url to match your setup)
pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 --index-url https://download.pytorch.org/whl/cu121
pip install setuptools wheel packaging psutil numpy
pip install flash-attn==2.5.6 --no-build-isolation
python -c "import flash_attn; print(flash_attn.__version__)"
Development
We use pre-commit to format code and pytest to run tests. Pull requests automatically run pre-commit and pytest, and will only be approved once all checks pass.
Before submitting a pull request, run the checks locally with:
uv run --group dev pre-commit run --all-files
and
uv run --group dev pytest -k 'not example'
Pull requests adding complex new features or making significant changes should be accompanied by tests in the tests/ directory.
Concepts
Data loading
ProFam uses text memmap datasets for fast random access over large corpora:
- profam/data/text_memmap_datasets.py: generic memory-mapped line access + index building (*.idx.{npy,info})
- profam/data/builders/family_text_memmap_datasets.py: ProFam-Atlas-specific datasets built on top of the memmap layer
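The core idea can be sketched as follows (a minimal stand-alone illustration, not ProFam's actual implementation: the index records the byte offset of every line start, so any line can be fetched in O(1) from an np.memmap):

import numpy as np

def build_line_index(path: str) -> np.ndarray:
    # Byte offset of each line start, plus the total size as an end sentinel.
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return np.array(offsets, dtype=np.int64)

offsets = build_line_index("data.sequences")
np.save("data.sequences.idx.npy", offsets)  # persist, mirroring the *.idx.npy convention
mm = np.memmap("data.sequences", dtype=np.uint8, mode="r")
line0 = bytes(mm[offsets[0]:offsets[1]]).decode().rstrip("\n")  # random access to line 0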
ProFam-Atlas on-disk format (.mapping / .sequences)
The ProFam-Atlas dataset is distributed as paired files:
- *.mapping: family id + indices into one or more *.sequences files
  - Format:
    - Line 1: >FAMILY_ID
    - Line 2+: sequences_filename:idx0,idx1,idx2,...
  - Important: *.mapping files must not have a trailing newline at end-of-file.
- *.sequences: FASTA-like accessions + sequences
  - Format (repeated): >ACCESSION followed by SEQUENCE lines
  - Important: *.sequences files should have a final trailing newline.
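A hypothetical pair might look like this (file names, family id, accessions, and sequences are invented for illustration):

# shard0.mapping (no trailing newline)
>FAM00001
shard0.sequences:0,2

# shard0.sequences (ends with a trailing newline)
>ACC_A
MKTAYIAKQR
>ACC_B
MKTVYIAKQR
>ACC_C
MKSAYIAKQR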
See README_ProFam_atlas.md for examples and additional details.
How it’s loaded
At a high level, training loads one protein family at a time by:
- Reading a family record from MappingProteinFamilyMemmapDataset (a memmapped *.mapping dataset)
- Fetching the referenced sequences from SequencesProteinFamilyMemmapDataset (memmapped *.sequences files)
- Building a ProteinDocument and preprocessing it (see profam/data/processors/preprocessing.py)
- Encoding with ProFamTokenizer and forming batches (optionally with packing)
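At the file level, resolving one family can be sketched without the memmap machinery (a simplified stand-alone reader, not the classes above; it assumes the indices in a *.mapping entry refer to FASTA record positions in the named *.sequences file):

from pathlib import Path

def read_fasta_records(path: Path) -> list[str]:
    # Plain sequences in file order; accession lines start with ">".
    records, current = [], []
    for line in path.read_text().splitlines():
        if line.startswith(">"):
            if current:
                records.append("".join(current))
            current = []
        else:
            current.append(line)
    if current:
        records.append("".join(current))
    return records

def read_family(mapping_path: Path, data_dir: Path) -> tuple[str, list[str]]:
    # Parse one family record: >FAMILY_ID, then filename:idx0,idx1,... lines.
    lines = mapping_path.read_text().splitlines()
    family_id = lines[0].lstrip(">")
    sequences = []
    for entry in lines[1:]:
        fname, idx_str = entry.split(":", 1)
        records = read_fasta_records(data_dir / fname)
        sequences.extend(records[int(i)] for i in idx_str.split(","))
    return family_id, sequences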
Converting FASTA → text memmap
If you have a directory of per-family FASTA files and want to create *.mapping / *.sequences files for training, see data_creation_scripts/fasta_to_text_memmap.py.
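In spirit, the conversion is the inverse of the reader sketched above: append each family's sequences to a shared *.sequences file and record their indices in a *.mapping entry. A toy version under the same assumptions (the real script also handles sharding and builds the *.idx index files; accessions are simplified here):

from pathlib import Path

def fasta_dir_to_memmap_files(fasta_dir: Path, out_prefix: str) -> None:
    seq_path = Path(out_prefix + ".sequences")
    entries, n = [], 0
    with seq_path.open("w") as out:
        for fasta in sorted(fasta_dir.glob("*.fasta")):
            idxs = []
            for seq in read_fasta_records(fasta):  # reader from the sketch above
                out.write(f">{n}\n{seq}\n")        # simplified numeric accessions
                idxs.append(str(n))
                n += 1
            entries.append(f">{fasta.stem}\n{seq_path.name}:{','.join(idxs)}")
    # per the format notes above, *.mapping must not end with a trailing newline
    Path(out_prefix + ".mapping").write_text("\n".join(entries))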