Train a transformer model with the command line

What is sequifier?

Sequifier makes training and inference of powerful causal transformer models fast and trustworthy.

Value Proposition

Implementing a transformer model from scratch takes time, and there are a surprisingly large number of aspects to get right. The idea behind sequifier: implement the model once, make it configurable, and then reuse the same implementation across domains and datasets.

This gives us a number of benefits:

  • rapid prototyping
  • configurable architecture
  • trusted implementation (you can't inadvertently introduce bugs)
  • standardized logging
  • native multi-gpu support
  • native multi-core preprocessing
  • scales to datasets larger than RAM
  • hyperparameter search
  • can be used for prediction, generation, and embedding of arbitrary sequences

The only requirement is having sequifier installed, and having input data in the right format.

The Five Commands

There are five standalone commands within sequifier: make, preprocess, train, infer and hyperparameter-search.

  • make sets up a new sequifier project in a new folder
  • preprocess transforms the data from the input format into subsequences of a fixed length
  • train trains a model on the preprocessed data
  • infer generates outputs from data in the preprocessed format and returns them in the initial input format
  • hyperparameter-search executes multiple training runs to find optimal configurations

There are documentation pages for each command except make.

Other Materials

To get the full auto-generated documentation, visit sequifier.com

If you first want to get a better understanding of the transformer architecture, have a look at the Wikipedia article.

If you want to see an end-to-end example on very simple synthetic data, check out this notebook.

Structure of a Sequifier Project

Sequifier is designed with a specific folder structure in mind:

YOUR_PROJECT_NAME/
├── configs/
│   ├── preprocess.yaml
│   ├── train.yaml
│   └── infer.yaml
├── data/
│   └── (Place your CSV/Parquet files here)
├── outputs/
└── logs/

The sequifier commands should typically be run in the project root.

Within YOUR_PROJECT_NAME, you can also add other folders for additional steps, such as notebooks or scripts for pre- and postprocessing, or folders for analyses, visualizations, or evals of files you generate in other, manual steps.

Data Transformations in Sequifier

Let's start with the data format expected by sequifier. The basic format used as input to the library takes the following form:

sequenceId  itemPosition  column1   column2  ...
0           0             "high"    12.3     ...
0           1             "high"    10.2     ...
...         ...           ...       ...      ...
1           0             "medium"  20.6     ...
...         ...           ...       ...      ...

The two columns "sequenceId" and "itemPosition" have to be present, and there must be at least one feature column. There can also be many feature columns, and they can be categorical or real-valued.
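
To illustrate, here is a minimal sketch that writes a toy dataset in this input format with pandas; the feature values and the file name example-input.csv are made up for the example:

    import pandas as pd

    # Toy data in sequifier's input format: "sequenceId" and "itemPosition"
    # are required; "column1" (categorical) and "column2" (real-valued) are
    # example feature columns.
    df = pd.DataFrame(
        {
            "sequenceId":   [0, 0, 0, 1, 1, 1],
            "itemPosition": [0, 1, 2, 0, 1, 2],
            "column1":      ["high", "high", "low", "medium", "high", "medium"],
            "column2":      [12.3, 10.2, 14.9, 20.6, 18.5, 21.6],
        }
    )

    # Write into the project's data folder; sequifier accepts CSV or Parquet files.
    df.to_csv("data/example-input.csv", index=False)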

Data in this input format can be transformed with sequifier preprocess into the format used for model training and inference, which takes this form:

sequenceId  subsequenceId  startItemPosition  columnName  [Subsequence Length]  [Subsequence Length - 1]  ...  0
0           0              0                  column1     "high"                "high"                    ...  "low"
0           0              0                  column2     12.3                  10.2                      ...  14.9
...         ...            ...                ...         ...                   ...                       ...  ...
1           0              15                 column1     "medium"              "high"                    ...  "medium"
1           0              15                 column2     20.6                  18.5                      ...  21.6
...         ...            ...                ...         ...                   ...                       ...  ...
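
Conceptually, preprocessing slides a fixed-length window over each sequence. The following sketch only illustrates that windowing idea and is not sequifier's actual implementation (which is multi-core, handles all columns at once, and produces the output layout above):

    def sliding_windows(values, length):
        """Split one sequence into subsequences of a fixed length.

        Illustration only; assumes a stride of 1, which is not necessarily
        what sequifier preprocess does.
        """
        return [values[i : i + length] for i in range(len(values) - length + 1)]

    # One feature column of a single sequence, windowed to length 3:
    print(sliding_windows(["high", "high", "low", "medium", "high"], 3))
    # [['high', 'high', 'low'], ['high', 'low', 'medium'], ['low', 'medium', 'high']]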

On inference, the output is returned in the library input format introduced above:

sequenceId  itemPosition  column1   column2  ...
0           963           "medium"  8.9      ...
0           964           "low"     6.3      ...
...         ...           ...       ...      ...
1           732           "medium"  14.4     ...
...         ...           ...       ...      ...

Complete Example of Training and Inferring a Transformer Model

Once you have your data in the input format described above, you can train a transformer model on it in a couple of steps.

  1. Create a conda environment with python >=3.10 and <=3.13, activate it, and run
     pip install sequifier
  2. To create the project folder with the config templates in the configs subfolder, run
     sequifier make YOUR_PROJECT_NAME
  3. cd into the YOUR_PROJECT_NAME folder, create a data folder, add your data, and adapt the config file preprocess.yaml in the configs folder to point to the data
  4. Run
     sequifier preprocess
  5. The preprocessing step outputs a metadata config at configs/metadata_configs/[FILE NAME]. Adapt the metadata_config_path parameter in train.yaml and infer.yaml to the path configs/metadata_configs/[FILE NAME]
  6. Adapt the config file train.yaml to specify the transformer hyperparameters you want and run
     sequifier train
  7. Adapt data_path in infer.yaml to one of the files output in the preprocessing step
  8. Run
     sequifier infer
  9. Find your predictions at [PROJECT ROOT]/outputs/predictions/sequifier-default-best-predictions.csv
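
As a quick sanity check, the predictions file from the last step can be loaded with pandas; it is in the same format as the original input data:

    import pandas as pd

    # Path from step 9; predictions have sequenceId, itemPosition,
    # and one column per predicted feature.
    predictions = pd.read_csv(
        "outputs/predictions/sequifier-default-best-predictions.csv"
    )
    print(predictions.head())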

Other Features

Embedding Model

While Sequifier's primary use case is training predictive or generative causal transformer models, it also supports the export of embedding models.

Configuration:

  • Training: Set export_embedding_model: true in the training config.
  • Inference: Set model_type: embedding in the inference config.

Technical Details: The generated embedding has dimensionality dim_model and consists of the final hidden state (activations) of the transformer's last layer corresponding to the last token in the sequence. Because the model is trained on a causal objective, this is a "forward-looking" embedding: it is optimized to compress the sequence history into a representation that maximizes information about the future state of the data.
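
For instance, two sequence embeddings can be compared with cosine similarity. The vectors below are hypothetical stand-ins (with dim_model shrunk to 4 for brevity); how you load real embeddings depends on your inference output:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors of size dim_model."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings for two sequences with similar histories.
    emb_a = np.array([0.12, -0.53, 0.88, 0.04])
    emb_b = np.array([0.10, -0.47, 0.91, 0.01])
    print(cosine_similarity(emb_a, emb_b))  # close to 1.0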

Distributed Training

Sequifier supports distributed training using torch DistributedDataParallel. To make use of multi-GPU support, the write format of the preprocessing step must be set to 'pt' and merge_output must be set to false in the preprocessing config.

System Requirements

Tiny transformer models on little data can be trained on a CPU. Bigger ones require an Nvidia GPU with a compatible CUDA version installed.

Sequifier currently runs on macOS and Ubuntu.

Citation

Please cite with:

@software{sequifier_2025,
  author = {Luithlen, Leon},
  title = {sequifier - causal transformer models for multivariate sequence modelling},
  year = {2025},
  publisher = {GitHub},
  version = {v1.0.0.1},
  url = {https://github.com/0xideas/sequifier}
}
