Protein family language models

ProFam: Open-Source Protein Family Language Modelling for Fitness Prediction and Design

ProFam is an open-source toolkit for training, scoring, and generating protein sequences with protein family language models (pfLMs). It packages the 251M-parameter ProFam-1 pfLM together with open training and inference workflows, a downloadable pretrained checkpoint, and an open dataset release for reproducible experimentation.

Installation

From PyPI

Install ProFam as a standard Python package:

uv pip install profam

or

pip install profam

From Source

If you want the full repository workflows, example data, and inference scripts:

git clone https://github.com/alex-hh/profam.git
cd profam
uv sync
profam download

Optional installs:

  • Development tooling: uv sync --group dev
  • FlashAttention 2: uv sync --extra flash-attn

If you run into CUDA or flash-attn issues, see Installation Details.

Quickstart

Verify the installed package

python -c "from profam import ProFam; print('ProFam ready')"

Download the pretrained model weights

The ProFam-1 model weights are hosted on Hugging Face. Download them ahead of time with the command below, or let them be auto-downloaded on first use:

profam download

Python API

The recommended way to use ProFam programmatically:

from profam import ProFam

model = ProFam()  # loads checkpoint once (auto-downloads if needed)

# Generate sequences conditioned on family context
result = model.generate(
    prompt=["ACDEFGHIKLMNPQRSTVWY", "ACDEFGHIKLMNPQRSTVWF"],
    num_samples=10,
    top_p=0.95,
)
print(result.sequences)  # list of generated amino acid strings
print(result.scores)     # mean log-likelihood per sequence

# Score candidate sequences
result = model.score(
    sequences=["ACDEFGHIKLMNPQRSTVWY", "ACDEFGHIKLMNPQRSTVWF"],
    prompt=["ACDEFGHIKLMNPQRSTVWY"],  # conditioning context
)
print(result.scores)  # numpy array of mean log-likelihoods

# Iterative design loop (initial_sequences, n_cycles, and the selection
# step are placeholders for your own data and evaluation pipeline)
prompt = initial_sequences
for cycle in range(n_cycles):
    result = model.generate(prompt=prompt, num_samples=20, top_p=0.95)
    selected_sequences = ...  # evaluate result.sequences with external tools
    prompt = initial_sequences + selected_sequences

CLI

profam generate -- --file_path family.fasta --num_samples 10
profam score -- --conditioning_fasta family.a3m --candidates_file variants.csv
profam download

Main Workflows

Workflow             Purpose                                        Command
Download checkpoint  Fetch the pretrained ProFam-1 checkpoint       profam download
Generate sequences   Sample new sequences from family prompts      profam generate -- --file_path ...
Score sequences      Score candidate sequences with family context  profam score -- --conditioning_fasta ...

Input Sequence Formats

ProFam supports:

  • Unaligned FASTA for standard protein sequence inputs
  • Aligned / MSA-style files (e.g. A2M/A3M) with gaps and insertions

For profam score, we recommend providing an aligned MSA file, because sequence weighting is used to encourage diversity when subsampling prompt sequences. Even when aligned inputs are provided, the standard ProFam model converts them into unaligned, gap-free sequences before the forward pass.
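
The weighting idea can be illustrated with a simple identity-based scheme, common in MSA modelling: each sequence is downweighted by the number of near-duplicates it has in the alignment. This is a hypothetical sketch (the `msa_weights` helper and the 0.8 threshold are illustrative; ProFam's actual weighting may differ):

```python
def msa_weights(msa, identity_threshold=0.8):
    """Inverse-neighbour-count weights over an aligned MSA.

    Each sequence's weight is 1 / (number of sequences, including itself,
    sharing at least identity_threshold fractional identity with it), so
    clusters of near-identical sequences get downweighted.
    """
    n = len(msa)
    length = len(msa[0])
    counts = [0] * n
    for i in range(n):
        for j in range(n):
            matches = sum(a == b for a, b in zip(msa[i], msa[j]))
            if matches / length >= identity_threshold:
                counts[i] += 1
    return [1.0 / c for c in counts]
```

Sampling prompt sequences in proportion to these weights favours diverse family members over redundant ones.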

During preprocessing:

  • gaps (- and alignment-like .) are removed
  • lowercase insertions are converted to uppercase
  • U -> C and O -> K
  • remaining out-of-vocabulary characters map to [UNK] only when allow_unk=true
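
The rules above can be sketched in plain Python. This is a hypothetical standalone helper, not the library's actual preprocessing function:

```python
def preprocess(seq: str, allow_unk: bool = False) -> str:
    """Apply the preprocessing rules described above to one sequence."""
    vocab = set("ACDEFGHIKLMNPQRSTVWY")
    out = []
    for ch in seq:
        if ch in "-.":                  # gaps and alignment dots are removed
            continue
        ch = ch.upper()                 # lowercase insertions -> uppercase
        ch = {"U": "C", "O": "K"}.get(ch, ch)  # U -> C, O -> K remapping
        if ch not in vocab:
            if allow_unk:
                out.append("[UNK]")     # out-of-vocabulary fallback
                continue
            raise ValueError(f"out-of-vocabulary character: {ch!r}")
        out.append(ch)
    return "".join(out)
```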

Training

Training is handled via Hydra configs and is intended for development from the source repository (not via pip-installed commands).

Run a lightweight example

configs/experiment/train_profam_example.yaml is configured to run on the bundled example data:

uv run python -m profam.train experiment=train_profam_example logger=null_logger

Train with the ProFam-Atlas dataset

Training data for ProFam can be downloaded from:

The default configuration in configs/train.yaml is compatible with the latest ProFam-Atlas release:

uv run python -m profam.train

Citation

If you use ProFam in your work, please cite the preprint:

@article{wells2025profam,
  title = {ProFam: Open-Source Protein Family Language Modelling for Fitness Prediction and Design},
  author = {Wells, Jude and Hawkins Hooker, Alex and Livne, Micha and Lin, Weining and Miller, David and Dallago, Christian and Bordin, Nicola and Paige, Brooks and Rost, Burkhard and Orengo, Christine and Heinzinger, Michael},
  journal = {bioRxiv},
  year = {2025},
  doi = {10.64898/2025.12.19.695431},
  url = {https://www.biorxiv.org/content/10.64898/2025.12.19.695431v1}
}

Installation Details

CPU-only installation

uv sync
uv pip install torch --index-url https://download.pytorch.org/whl/cpu

FlashAttention 2

We recommend installing FlashAttention 2 for faster scoring and generation. For training, it is strongly recommended because ProFam uses sequence packing with batch_size=1 and no padding.

If you need to train without Flash Attention, update the configuration to set data.pack_to_max_tokens=null.

uv sync --extra flash-attn
python -c "import flash_attn; print(flash_attn.__version__)"

Troubleshooting: conda fallback

If a matching flash-attn wheel is unavailable and a source build is required, this conda-based fallback is often the easiest route:

conda create -n pfenv python=3.11 -y
conda activate pfenv

conda install -c conda-forge ninja packaging -y
conda install -c nvidia cuda-toolkit=12.4 -y

pip install profam

# install a CUDA-enabled PyTorch build (adjust CUDA version/index-url to match your setup)
pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 --index-url https://download.pytorch.org/whl/cu121

pip install setuptools wheel packaging psutil numpy
pip install flash-attn==2.5.6 --no-build-isolation

python -c "import flash_attn; print(flash_attn.__version__)"

Development

We use pre-commit to format code and pytest to run tests.

Pull requests automatically run pre-commit and pytest, and will only be approved once all checks pass.

Before submitting a pull request, run the checks locally with:

uv run --group dev pre-commit run --all-files

and

uv run --group dev pytest -k 'not example'

Pull requests that add complex new features or make significant changes or additions should be accompanied by tests in the tests/ directory.

Concepts

Data loading

ProFam uses text memmap datasets for fast random access over large corpora:

  • profam/data/text_memmap_datasets.py: generic memory-mapped line access + index building (*.idx.{npy,info})
  • profam/data/builders/family_text_memmap_datasets.py: ProFam-Atlas-specific datasets built on top of the memmap layer

ProFam-Atlas on-disk format (.mapping / .sequences)

The ProFam-Atlas dataset is distributed as paired files:

  • *.mapping: family id + indices into one or more *.sequences files
    • Format:
      • Line 1: >FAMILY_ID
      • Line 2+: sequences_filename:idx0,idx1,idx2,...
    • Important: *.mapping files must not have a trailing newline at end-of-file.
  • *.sequences: FASTA-like accessions + sequences
    • Format (repeated):
      • >ACCESSION ...
      • SEQUENCE
    • Important: *.sequences files should have a final trailing newline.

See README_ProFam_atlas.md for examples and additional details.

How it’s loaded

At a high level, training loads one protein family at a time by:

  1. Reading a family record from MappingProteinFamilyMemmapDataset (a memmapped *.mapping dataset)
  2. Fetching the referenced sequences from SequencesProteinFamilyMemmapDataset (memmapped *.sequences files)
  3. Building a ProteinDocument and preprocessing it (see profam/data/processors/preprocessing.py)
  4. Encoding with ProFamTokenizer and forming batches (optionally with packing)

Converting FASTA → text memmap

If you have a directory of per-family FASTA files and want to create *.mapping / *.sequences files for training, see:

  • data_creation_scripts/fasta_to_text_memmap.py
