
t2t-tuner

Convenient Text-to-Text Training for Transformers

pip install t2t-tuner

Requires PyTorch: either follow PyTorch installation instructions or use a PyTorch container.

Features

  • Easy training for text-to-text generation tasks
  • Training methods/features:
    • Supervised fine-tuning
    • Gradient checkpointing
    • Model parallelism
    • Soft prompt tuning (based on this paper)
    • Freeze encoder/decoder/embeddings
    • Print model summary
  • Based on the wonderful HuggingFace Transformers library. Tested on T5-based models. In theory, it should work with other models that support AutoModelForSeq2SeqLM as well

This work is based on HuggingFace's run_translation.py script for text-to-text generation tasks. I wanted a more convenient interface for training and inference, along with access to things like gradient checkpointing and model parallelism to fit larger models - these are already in the HuggingFace library but not exposed in the script. I also added some features that I wanted (prompt tuning, model summary) and wrapped it all up as a library that can be pip-installed.
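For reference, the sketch below shows roughly what those underlying features look like when called directly through the plain Transformers API; this is not t2t-tuner's own interface, which wraps the equivalent calls for you:

import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# gradient checkpointing: trade extra compute for lower activation memory
model.gradient_checkpointing_enable()

# naive model parallelism: spread the T5 blocks across all visible GPUs
# (T5-specific and deprecated in newer Transformers releases)
if torch.cuda.device_count() > 1:
    model.parallelize()

# freeze the encoder (this also freezes the shared input embeddings)
for p in model.get_encoder().parameters():
    p.requires_grad = False

# quick model summary: total vs. trainable parameter counts
print(model.num_parameters(), model.num_parameters(only_trainable=True))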

Examples

Simple snippet:

import t2t

# YOUR_DATASET: path to a JSON Lines dataset (see "Data format" below)
trainer_arguments = t2t.TrainerArguments(model_name_or_path="t5-small",
                                         train_file=YOUR_DATASET)

trainer = t2t.Trainer(arguments=trainer_arguments)

# train without validation
trainer.train(valid=False)

For more concrete examples, check out the notebooks linked below.

Data format:

{"translation": {"s": "TEXT", "t": "LABEL"}}
  • The data format is JSON Lines, following the original HuggingFace script; each example is one line (a minimal conversion sketch follows this list)
  • Define the source and target IDs in TrainerArguments.source_id and TrainerArguments.target_id (the defaults are s and t)
  • Include the prefix in the data file, or define the prefix to prepend to the text in TrainerArguments.prefix
  • Example notebook for data preprocessing from CSV file
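As a minimal sketch of this format, the snippet below converts a two-column CSV into the expected JSON Lines layout using only the standard library (the file names and the text/label column names are placeholders, not part of the library):

import csv
import json

# hypothetical input: a CSV with "text" and "label" columns
with open("data.csv", newline="") as src, open("train.json", "w") as dst:
    for row in csv.DictReader(src):
        # one JSON object per line, using the default source/target IDs "s" and "t"
        example = {"translation": {"s": row["text"], "t": row["label"]}}
        dst.write(json.dumps(example) + "\n")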

Training Large Models

Using this library, you can fine-tune the T5-11b checkpoints quite easily with the following settings (a configuration sketch follows the list):

  • Batch size 1 plus gradient accumulation to build up whatever effective batch size you need
  • A batch size of 8 is possible with gradient checkpointing, but it doesn't improve the speed
  • About 128GB of VRAM: 8x 16GB or 4x 32GB GPUs (such as V100s)
  • FP32 (no need for mixed precision)
    • FP16 would actually be better, but the pretrained T5 checkpoints don't play well with FP16, as the existing activations are too large (see the GitHub issue tracking this)
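For orientation, the sketch below expresses those settings with plain HuggingFace Seq2SeqTrainingArguments; t2t-tuner's own TrainerArguments fields may be named differently, and the accumulation step count is just an example:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-11b-finetuned",
    per_device_train_batch_size=1,   # batch size 1 per GPU
    gradient_accumulation_steps=32,  # example value: accumulate to the effective batch size you need
    gradient_checkpointing=True,     # trade compute for memory
    fp16=False,                      # stay in FP32; T5 checkpoints overflow in FP16
)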

Model parallel T5-11b

Note that depending on your system, the loading time for the checkpoint (46GB) can be quite long.

Development

Building Package

python3 -m pip install --upgrade build twine
python3 -m build
python3 -m twine upload dist/*

Disclaimers

This library was developed as a personal project for my own use. Please feel free to fork or use it for your own purposes as well. I will not take responsibility for any mishaps that occur as a result of this library's usage.
