A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.
Project description
FinetuningScheduler is simple to use yet powerful, offering a number of features that facilitate model research and exploration:
- easy specification of flexible fine-tuning schedules with explicit or regex-based parameter selection
- implicit schedules for initial/naive model exploration
- explicit schedules for performance tuning, fine-grained behavioral experimentation and computational efficiency
- automatic restoration of best per-phase checkpoints driven by iterative application of early-stopping criteria to each fine-tuning phase
- composition of early-stopping and manually-set epoch-driven fine-tuning phase transitions
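The regex-based parameter selection mentioned above can be pictured as ordinary pattern matching over a model's named parameters. The following is a standalone sketch with hypothetical parameter names, not the library's API:

```python
import re

# Hypothetical parameter names, standing in for model.named_parameters().
param_names = [
    "model.classifier.weight",
    "model.classifier.bias",
    "model.encoder.layer.11.attention.self.query.weight",
]

# A fine-tuning phase that targets only the classifier parameters via a regex.
pattern = re.compile(r"model\.classifier\..*")
selected = [n for n in param_names if pattern.match(n)]
# selected == ["model.classifier.weight", "model.classifier.bias"]
```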
Setup
Step 0: Install from PyPI
pip install finetuning-scheduler
Step 1: Import the FinetuningScheduler callback and start fine-tuning!
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler
trainer = Trainer(callbacks=[FinetuningScheduler()])
Get started by following the Fine-Tuning Scheduler introduction which includes a CLI-based example or by following the notebook-based Fine-Tuning Scheduler tutorial.
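As a concrete illustration, an explicit schedule is supplied as a YAML file mapping phase indices to lists of parameters to thaw in that phase. The phase keys and parameter names below are assumptions for illustration; check the Fine-Tuning Scheduler documentation for the exact schedule format and the callback argument (e.g. `ft_schedule`) used to pass the file:

```yaml
# Hypothetical two-phase schedule: phase 0 thaws the classifier first,
# phase 1 then adds the encoder parameters (selected here by pattern).
0:
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  params:
  - model.encoder.*
```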
Examples
Scheduled Fine-Tuning For SuperGLUE
Continuous Integration
Fine-Tuning Scheduler is rigorously tested across multiple CPUs and GPUs and against major Python and PyTorch versions. Each Fine-Tuning Scheduler minor release (major.minor) is paired with a PyTorch Lightning minor release (e.g. Fine-Tuning Scheduler 0.2 depends upon PyTorch Lightning 1.7).
To maximize stability, the latest PyTorch Lightning patch release fully tested with Fine-Tuning Scheduler is set as the maximum dependency in Fine-Tuning Scheduler's requirements.txt (e.g. <= 1.6.1). If you'd like to test a PyTorch Lightning patch version newer than that maximum, it will likely work, but you should install Fine-Tuning Scheduler from source and update requirements.txt as desired.
Current build statuses for Fine-Tuning Scheduler
CI build-status badges (omitted in this rendering) cover Linux py3.{7,9}, OSX py3.{7,9}, and Windows py3.{7,9} against PyTorch 1.9 (min. req.) and 1.12 (latest), plus Linux py3.9 on GPUs (tests run on two RTX 2070s) against PyTorch 1.12.
Community
Fine-Tuning Scheduler is developed and maintained by the community in close communication with the PyTorch Lightning team. Thanks to everyone in the community for their tireless effort building and improving the immensely useful core PyTorch Lightning project.
PRs welcome! Please see the contributing guidelines (which are essentially the same as PyTorch Lightning's).
Citing Fine-Tuning Scheduler
Please cite:
@misc{Dan_Dale_2022_6463952,
author = {Dan Dale},
title = {{Fine-Tuning Scheduler}},
month = feb,
year = 2022,
doi = {10.5281/zenodo.6463952},
publisher = {Zenodo},
url = {https://zenodo.org/record/6463952}
}
Feel free to star the repo as well if you find it useful or interesting. Thanks 😊!
Hashes for finetuning-scheduler-0.2.1.tar.gz
Algorithm | Hash digest
--- | ---
SHA256 | 41bed4e0952be65fde0f7a703732f1d8b52359aef58dd23113397a013cb9b1d8
MD5 | 3710b1d1f0c44b6da1109bebcd18e58f
BLAKE2b-256 | 3ef71013ac4a6f6daf37e9167e7cbf3afbc0f2c372e82503587379b65abf93b5
Hashes for finetuning_scheduler-0.2.1-py3-none-any.whl
Algorithm | Hash digest
--- | ---
SHA256 | 844ee40154d77d80fbb8db1824e1e5b38098f3b58b2f5710346c17e5c46c1d49
MD5 | 3434f60802a830a07ed0f0becb0025a0
BLAKE2b-256 | 9eab1c381013735dc258ff0885d1e8782e60abb24c6d9f59242155d53e9fe494