Train a transformer model with the command line
Overview
The sequifier package enables:
- the extraction of sequences for training from a standardised format
- the configuration and training of a transformer classification model
- inference on data with a trained model

Each of these steps is explained below.
Preprocessing of data into sequences for training
The preprocessing step is designed for scenarios where, given a long series of events, the prediction of the next event from the previous N events is of interest. For sequences where only the last interaction is a valid target, the preprocessing step does not apply.
This step presupposes input data with three columns: sequenceId, itemId and timesort. sequenceId and itemId identify a user/item interaction, and the timesort column must provide values that enable sequential sorting. Often this will simply be a timestamp.
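As an illustration, input data in this format could be written as follows (the column names come from the description above; the file name and all values are hypothetical):

```python
import pandas as pd

# Hypothetical event log in the expected three-column format:
# sequenceId groups the events of one sequence, itemId is the interacted
# item, and timesort provides the sort order (here a unix-like timestamp).
events = pd.DataFrame({
    "sequenceId": [1, 1, 1, 1, 2, 2, 2],
    "itemId":     [10, 12, 10, 13, 12, 14, 10],
    "timesort":   [1001, 1002, 1003, 1004, 1001, 1002, 1003],
})

events.to_csv("events.csv", index=False)
```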
The data can then be processed into training, validation and testing datasets of all valid subsequences in the original data with the command:
sequifier.py --preprocess --config_path=[CONFIG PATH] --project_path=[PROJECT PATH]
The config path specifies the path to the preprocessing config, and the project path points to the (preferably empty) folder that the output files of the different steps are written to.
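The extraction of all valid subsequences can be pictured as a sliding window over the time-sorted events of each sequence. The following is only an illustrative re-implementation under stated assumptions (a fixed window length n, with the next event as the target), not the package's actual code:

```python
from collections import defaultdict

def extract_subsequences(events, n):
    """For each sequenceId, sort events by timesort and emit every window
    of n consecutive items together with the item that follows it (the target)."""
    by_seq = defaultdict(list)
    for seq_id, item_id, timesort in events:
        by_seq[seq_id].append((timesort, item_id))

    rows = []
    for seq_id, pairs in by_seq.items():
        items = [item for _, item in sorted(pairs)]
        for start in range(len(items) - n):
            rows.append((seq_id, items[start:start + n], items[start + n]))
    return rows

# Hypothetical (sequenceId, itemId, timesort) triples
events = [
    (1, 10, 1001), (1, 12, 1002), (1, 10, 1003), (1, 13, 1004),
    (2, 12, 1001), (2, 14, 1002), (2, 10, 1003),
]
subsequences = extract_subsequences(events, n=2)
```

The resulting rows could then be split into training, validation and testing datasets.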
The default config can be found at this path:
configs/preprocess/default.yaml
Configuring and training the sequence classification model
The training step is executed with the command:
sequifier.py --train --config_path=[CONFIG PATH] --project_path=[PROJECT PATH]
If the data on which the model is trained comes from the preprocessing step, the flag
--on-preprocessed
should also be added.
If the training data does not come from the preprocessing step, both the training and validation data have to take the form of a CSV file with the columns:
sequenceId, seq_length, seq_length-1,...,1, target
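As a hedged sketch of this layout: with seq_length = 3 the columns would be sequenceId, 3, 2, 1, target. The file name and values below are hypothetical, and the assumption that column seq_length holds the oldest item in the window (and column 1 the most recent) is mine, not stated in the text:

```python
import csv

seq_length = 3  # length of the input window; chosen here for illustration

# Column layout described above: sequenceId, seq_length, ..., 1, target
header = ["sequenceId"] + [str(i) for i in range(seq_length, 0, -1)] + ["target"]

rows = [
    # sequenceId, items at positions 3, 2, 1, then the target item
    [1, 10, 12, 10, 13],
    [2, 12, 14, 10, 11],
]

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
```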
The training step is configured using the config. The two default configs can be found here:
configs/train/default.yaml
configs/train/default-on-preprocessed.yaml
Inferring on test data using the trained model
Inference is done using the command:
sequifier.py --infer --config_path=[CONFIG PATH] --project_path=[PROJECT PATH]
and configured using a config file. The default version can be found here:
configs/infer/default.yaml