fastxtend
fastxtend (fastai extended) is a collection of tools, extensions, and addons for fastai.
Feature overview
General Features
- Fused optimizers which are 21% to 293% faster than fastai native optimizers.
- Flexible metrics which can log on train, valid, or both. Backwards compatible with fastai metrics.
- Easily use multiple losses and log each individual loss on train and valid.
- A simple profiler for profiling fastai training.
Vision
- Increase training speed using `ProgressiveResize` to automatically apply progressive resizing.
- Apply `MixUp`, `CutMix`, or augmentations with `CutMixUp` or `CutMixUpAugment`.
- Additional image augmentations.
- Support for running fastai batch transforms on CPU.
- More attention and pooling modules.
- A flexible implementation of fastai's `XResNet`.
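Progressive resizing trains at a small image size early on and steps up to the full size as training proceeds. The schedule below is only an illustrative sketch of that idea; `resize_schedule` and its parameters are hypothetical, not `ProgressiveResize`'s actual API.

```python
def resize_schedule(initial_size, final_size, start_pct, n_epochs, step=32):
    """Return the image size to use for each epoch, ramping linearly from
    `initial_size` to `final_size` once `start_pct` of training has elapsed."""
    sizes = []
    start_epoch = int(n_epochs * start_pct)
    resize_epochs = max(n_epochs - start_epoch, 1)
    for epoch in range(n_epochs):
        if epoch < start_epoch:
            sizes.append(initial_size)
        else:
            progress = (epoch - start_epoch + 1) / resize_epochs
            size = initial_size + (final_size - initial_size) * progress
            # round down to a multiple of `step`, a common GPU-friendly choice
            sizes.append(min(final_size, int(size // step) * step))
    return sizes

# e.g. start resizing halfway through an 8-epoch run
print(resize_schedule(128, 224, start_pct=0.5, n_epochs=8))
```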
Audio
- `TensorAudio`, `TensorSpec`, and `TensorMelSpec` objects which maintain metadata and support plotting themselves using librosa.
- A selection of performant audio augmentations inspired by fastaudio and torch-audiomentations.
- Uses TorchAudio to quickly convert `TensorAudio` waveforms into `TensorSpec` spectrograms or `TensorMelSpec` mel spectrograms on the GPU.
- Out of the box support for converting one `TensorAudio` into one or multiple `TensorSpec` or `TensorMelSpec` objects from the DataBlock API.
- Audio MixUp and CutMix callbacks.
- `audio_learner` which merges multiple `TensorSpec` or `TensorMelSpec` objects before passing them to the model.
Check out the documentation for additional splitters, callbacks, schedulers, utilities, and more.
Documentation
https://fastxtend.benjaminwarner.dev
Install
fastxtend is available on PyPI:
pip install fastxtend
To install with dependencies for vision, audio, or all tasks run one of:
pip install fastxtend[vision]
pip install fastxtend[audio]
pip install fastxtend[all]
Or to create an editable install:
git clone https://github.com/warner-benjamin/fastxtend.git
cd fastxtend
pip install -e ".[dev]"
Usage
Like fastai, fastxtend provides safe wildcard imports using Python's `__all__`.
from fastai.vision.all import *
from fastxtend.vision.all import *
In general, import fastxtend after all fastai imports, as fastxtend modifies fastai. Any method modified by fastxtend is backwards compatible with the original fastai code.
Examples
Use a fused ForEach optimizer:
Learner(..., opt_func=adam(fused=True))
Log an accuracy metric on the training set as a smoothed metric and validation set like normal:
Learner(..., metrics=[Accuracy(log_metric=LogMetric.Train, metric_type=MetricType.Smooth),
Accuracy()])
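A smoothed metric tracks a debiased exponential moving average of the raw values rather than a plain mean. As a minimal sketch of the idea (the class and names here are hypothetical, not fastxtend's `MetricType.Smooth` implementation):

```python
class SmoothMetric:
    """Debiased exponential moving average of a streaming metric value."""
    def __init__(self, beta=0.98):
        self.beta, self.val, self.count = beta, 0.0, 0

    def update(self, new_value):
        self.count += 1
        self.val = self.beta * self.val + (1 - self.beta) * new_value
        # divide out the bias toward zero from initializing `val` at 0
        return self.val / (1 - self.beta ** self.count)

m = SmoothMetric(beta=0.9)
for batch_accuracy in [1.0, 1.0, 1.0]:
    smoothed = m.update(batch_accuracy)
# with a constant input, the debiased average recovers that constant exactly
print(smoothed)
```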
Log multiple losses as individual metrics on train and valid:
mloss = MultiLoss(loss_funcs=[nn.MSELoss, nn.L1Loss],
weights=[1, 3.5], loss_names=['mse_loss', 'l1_loss'])
Learner(..., loss_func=mloss, metrics=RMSE(), cbs=MultiLossCallback)
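The core of a multi-loss setup is computing each loss separately, logging it under its own name, and returning the weighted sum. A pure-Python stand-in for that pattern (illustrative only, not fastxtend's `MultiLoss` implementation):

```python
def mse_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def l1_loss(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def multi_loss(pred, target, loss_funcs, weights, loss_names):
    """Return (total weighted loss, dict of per-loss values for logging)."""
    logs = {name: fn(pred, target)
            for name, fn in zip(loss_names, loss_funcs)}
    total = sum(w * logs[name] for w, name in zip(weights, loss_names))
    return total, logs

total, logs = multi_loss([1.0, 2.0], [0.0, 0.0],
                         [mse_loss, l1_loss], weights=[1, 3.5],
                         loss_names=['mse_loss', 'l1_loss'])
print(total, logs)
```

Each named loss can then be reported as its own metric while the weighted total drives the backward pass.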
Apply MixUp, CutMix, or Augmentation while training:
Learner(..., cbs=CutMixUpAugment)
Profile a fastai training loop:
from fastxtend.callback import simpleprofiler
learn = Learner(...).profile()
learn.fit_one_cycle(2, 3e-3)
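A simple profiler of this kind times each phase of the loop and reports the totals. As an illustration of the general technique only (a hand-rolled stand-in, not the `simpleprofiler` API):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class SimpleProfiler:
    """Accumulate wall-clock time spent in named phases."""
    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def record(self, phase):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[phase] += time.perf_counter() - start

prof = SimpleProfiler()
with prof.record('forward'):
    sum(range(100_000))   # stand-in for a forward pass
with prof.record('backward'):
    sum(range(100_000))   # stand-in for a backward pass
print(dict(prof.totals))
```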
Train in channels last format:
Learner(...).to_channelslast()
Requirements
fastxtend requires fastai to be installed. See http://docs.fast.ai for installation instructions.