Train a transformer model with the command line
What is sequifier?
Sequifier is a library that makes prototyping autoregressive transformer models for sequence modelling easy, reliable and comparable.
Motivation
Researchers, data scientists and ML scientists can take their sequential data sets, transform them into a standardized format, and from there use sequifier and configuration files to develop a model for this sequential data. These models can be applied to a test set, and used to extrapolate the sequences through autoregression for an arbitrary number of steps. This should enable much faster development and evaluation cycles for generative transformer models across domains.
Importantly, sequifier works for an arbitrary number of categorical and real valued input and output columns, and can therefore represent a large set of possible mappings from inputs to outputs. The input and output columns do not have to be identical.
The standardized implementation of a decoder-only autoregressive transformer saves the work of implementing this model and the workflows around it repeatedly, across different domains and data sets, thereby reducing duplicate work and the probability of bugs and compromised results.
The standardized configuration enables easier experimentation and experiment tracking, and, if results are shared, an ever-improving basis for decision making on the initial configuration when applying the transformer architecture to a new problem.
Overall, it should be possible, even for non-experts in machine learning, to develop an initial prototype for a transformer model for a new domain. If the results are promising, it might become necessary to implement architecture variants that fall outside the scope of sequifier, but with a much cheaper (in terms of time and effort) initial exploration, many more potential application domains can be investigated.
Sequifier can also be used to train and infer forward-looking embedding models. These models output the activations of the last shared layer of the transformer, which encapsulate the information contained in the sequence so far that is useful for predicting the next time step.
Data Formats
The basic data format that is used as input to the library takes the following form:
| sequenceId | itemPosition | column1 | column2 | ... |
|---|---|---|---|---|
| 0 | 0 | "high" | 12.3 | ... |
| 0 | 1 | "high" | 10.2 | ... |
| ... | ... | ... | ... | ... |
| 1 | 0 | "medium" | 20.6 | ... |
| ... | ... | ... | ... | ... |
The two columns "sequenceId" and "itemPosition" have to be present, and then there must be at least one feature column. There can be arbitrarily many feature columns, and each can be categorical or real valued.
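For illustration, here is a minimal sketch that writes a toy dataset in this input format with pandas (the values mirror the table above; the file path is a placeholder):

```python
import pandas as pd

# Toy dataset in the sequifier input format: one row per item per sequence,
# with a categorical feature ("column1") and a real-valued one ("column2").
df = pd.DataFrame({
    "sequenceId":   [0, 0, 0, 1, 1, 1],
    "itemPosition": [0, 1, 2, 0, 1, 2],
    "column1":      ["high", "high", "low", "medium", "high", "medium"],
    "column2":      [12.3, 10.2, 14.9, 20.6, 18.5, 21.6],
})

# csv and parquet are both accepted as preprocessing input.
df.to_csv("data/input.csv", index=False)
```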
Data of this input format can be transformed into the format that is used for model training and inference, which takes this form:
| sequenceId | subsequenceId | startItemPosition | columnName | [Subsequence Length] | [Subsequence Length - 1] | ... | 0 |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | column1 | "high" | "high" | ... | "low" |
| 0 | 0 | 0 | column2 | 12.3 | 10.2 | ... | 14.9 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 1 | 0 | 15 | column1 | "medium" | "high" | ... | "medium" |
| 1 | 0 | 15 | column2 | 20.6 | 18.5 | ... | 21.6 |
| ... | ... | ... | ... | ... | ... | ... | ... |
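Conceptually, this transformation slides a fixed-length window over each sequence. The sketch below is not sequifier's implementation, just an illustration of the windowing for a single feature column; for simplicity the position labels here run from `length - 1` down to `0` (the most recent item), whereas the actual format shown above labels them from [Subsequence Length] down to 0.

```python
import pandas as pd

def to_subsequences(df: pd.DataFrame, column: str, length: int) -> pd.DataFrame:
    """Illustrative sliding-window transform for one feature column."""
    rows = []
    for seq_id, group in df.sort_values("itemPosition").groupby("sequenceId"):
        values = group[column].tolist()
        for sub_id, start in enumerate(range(len(values) - length + 1)):
            window = values[start : start + length]
            rows.append({
                "sequenceId": seq_id,
                "subsequenceId": sub_id,
                "startItemPosition": group["itemPosition"].iloc[start],
                "columnName": column,
                # position labels count down to "0", the most recent item
                **{str(length - 1 - i): v for i, v in enumerate(window)},
            })
    return pd.DataFrame(rows)
```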
At inference, the output is returned in the library input format introduced above:
| sequenceId | itemPosition | column1 | column2 | ... |
|---|---|---|---|---|
| 0 | 963 | "medium" | 8.9 | ... |
| 0 | 964 | "low" | 6.3 | ... |
| ... | ... | ... | ... | ... |
| 1 | 732 | "medium" | 14.4 | ... |
| ... | ... | ... | ... | ... |
There are four standalone commands within sequifier: `make`, `preprocess`, `train` and `infer`. `make` sets up a new sequifier project in a new folder, `preprocess` transforms the data from the input format into subsequences of a fixed length, `train` trains a model on the preprocessed data, and `infer` generates outputs from data in the preprocessed format and returns them in the initial input format.
The input data can be a single csv or parquet file, or a folder of csv or parquet files. The preprocessing output can be a csv or parquet file per split, or a folder of multiple torch tensor (pt) files per split. The training step does not output any data files (it outputs model files and logs). The inference output can be a single csv or parquet file, or a folder of csv or parquet files. In general, it is recommended to store every step as a single file if the initial input is a single file, and as a folder of files if the initial data is a folder of files. For the folder "flow", the write format of the preprocessing step has to be "pt".
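Assuming the config files have already been adapted as described in the walkthrough below, the four commands chain together like this (a sketch using Python's subprocess; the commands can equally be run directly in a shell, and "demo_project" is a placeholder name):

```python
import subprocess

# Create a new project folder with config templates ...
subprocess.run(["sequifier", "make", "demo_project"], check=True)

# ... then run the three pipeline steps from inside it; each step reads
# its yaml config from the project's configs folder.
for step in ("preprocess", "train", "infer"):
    subprocess.run(["sequifier", step], check=True, cwd="demo_project")
```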
Other materials
To get more details on the specific configuration options, see the docs page.
If you first want to get a better understanding of the transformer architecture, have a look at the Wikipedia article.
If you want to see a benchmark on a small synthetic dataset with 10k cases, against a random forest, an xgboost model and a logistic regression, check out this notebook.
Complete example: how to build and apply a transformer sequence classifier with sequifier
- create a conda environment with python >=3.9, activate it, and run
pip install sequifier
- to create the project folder with the config templates in the `configs` subfolder, run
sequifier make YOUR_PROJECT_NAME
- cd into the `YOUR_PROJECT_NAME` folder, create a `data` folder and add your data, and adapt the config file `preprocess.yaml` in the `configs` folder to point to the data path
- run
sequifier preprocess
- the preprocessing step outputs a "data driven config" at `configs/ddconfigs/[FILE NAME]`. It contains the number of classes found in the data, a map of classes to indices, and the paths to the train, validation and test splits of the data. Adapt the `dd_config` parameter in `train.yaml` and `infer.yaml` to the path `configs/ddconfigs/[FILE NAME]`
- adapt the config file `train.yaml` to specify the transformer hyperparameters you want and run
sequifier train
- adapt `data_path` in `infer.yaml` to one of the files output in the preprocessing step
- run
sequifier infer
- find your predictions at `[PROJECT PATH]/outputs/predictions/sequifier-default-best-predictions.csv`
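Since the predictions are returned in the library input format, they can be inspected with standard tools, for example (assuming the project folder as working directory):

```python
import pandas as pd

# Path from the last step of the walkthrough above.
preds = pd.read_csv("outputs/predictions/sequifier-default-best-predictions.csv")
print(preds.head())
```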
More detailed explanations of the three steps
Preprocessing of data into sequences for training
sequifier preprocess --config_path=[CONFIG PATH]
The config path specifies the path to the preprocessing config, and the project path specifies the (preferably empty) folder that the output files of the different steps are written to.
The default config can be found at this path:
Configuring and training the sequence classification model
The training step is executed with the command:
sequifier train --config_path=[CONFIG PATH]
If the data on which the model is trained DOES NOT come from the preprocessing step, the flag
--on-unprocessed
should be added.
If the training data does not come from the preprocessing step, both train and validation data have to take the form of a csv file with the columns "sequenceId", "subsequenceId", "inputCol", [SEQ LENGTH], [SEQ LENGTH - 1], ..., "1", "0". You can find an example of this training input format at documentation/example_inputs/training_input.csv
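For orientation, a minimal sketch that writes a csv in this unprocessed format, with a toy sequence length of 3 (so the position columns are "3", "2", "1", "0"; the file path is a placeholder):

```python
import pandas as pd

# One row per (sequence, subsequence, input column); position columns run
# from the oldest item ("3") down to the most recent one ("0").
train_df = pd.DataFrame({
    "sequenceId":    [0, 0],
    "subsequenceId": [0, 1],
    "inputCol":      ["column1", "column1"],
    "3": ["high", "high"],
    "2": ["high", "low"],
    "1": ["low", "medium"],
    "0": ["medium", "high"],
})
train_df.to_csv("data/train_unprocessed.csv", index=False)
```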
The training step is configured using the config. The two default configs can be found here, depending on whether the preprocessing step was executed:
Inferring on test data using the trained model
Inference is done using the command:
sequifier infer --config_path=[CONFIG PATH]
and configured using a config file. The default version can be found here:
Distributed Training
Sequifier supports distributed training using torch DistributedDataParallel. To make use of multi-GPU support, the write format of the preprocessing step must be set to 'pt'.
Citation
Please cite with:
@software{sequifier_2025,
author = {Luithlen, Leon},
title = {sequifier - autoregressive transformer models for multivariate sequence modelling},
year = {2025},
publisher = {GitHub},
version = {0.6.2.8},
url = {https://github.com/0xideas/sequifier}
}