PyTorch-based library for robust prototyping, standardized benchmarking, and effortless experiment management
Flambé
Welcome to Flambé, a PyTorch-based library that allows users to:
Run complex experiments with multiple training and processing stages
Search over hyperparameters and select the best trials
Run experiments remotely over many workers, including full AWS integration
Easily share experiment configurations, results, and model weights with others
Installation
From pip:
pip install flambe # CPU Version
# OR
pip install flambe[cuda] # With GPU / CUDA support
From source:
git clone git@github.com:Open-ASAPP/flambe.git
cd flambe
pip install .
Getting started
Define an Experiment:
!Experiment

name: sst-text-classification

pipeline:

  # Stage 0 - Load the Stanford Sentiment Treebank dataset and run preprocessing
  dataset: !SSTDataset
    transform:
      text: !TextField
      label: !LabelField

  # Stage 1 - Define a model
  model: !TextClassifier
    embedder: !Embedder
      embedding: !torch.Embedding  # automatically use pytorch classes
        num_embeddings: !@ dataset.text.vocab_size
        embedding_dim: 300
      embedding_dropout: 0.3
      encoder: !PooledRNNEncoder
        input_size: 300
        n_layers: !g [2, 3, 4]
        hidden_size: 128
        rnn_type: sru
        dropout: 0.3
    output_layer: !SoftmaxLayer
      input_size: !@ model.embedder.encoder.rnn.hidden_size
      output_size: !@ dataset.label.vocab_size

  # Stage 2 - Train the model on the dataset
  train: !Trainer
    dataset: !@ dataset
    model: !@ model
    train_sampler: !BaseSampler
    val_sampler: !BaseSampler
    loss_fn: !torch.NLLLoss
    metric_fn: !Accuracy
    optimizer: !torch.Adam
      params: !@ train.model.trainable_params
    max_steps: 10
    iter_per_step: 100

  # Stage 3 - Eval on the test set
  eval: !Evaluator
    dataset: !@ dataset
    model: !@ train.model
    metric_fn: !Accuracy
    eval_sampler: !BaseSampler

# Define how to schedule variants
schedulers:
  train: !tune.HyperBandScheduler
All objects in the pipeline are subclasses of Component, which are automatically registered for use in YAML configs. Custom Component implementations must implement run to add custom behavior when executed.
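As an illustration, a bare-bones custom stage might look like the following (a sketch only: the import path and the run-returns-bool convention are assumptions based on the documentation, and the class name is hypothetical):

from flambe.compile import Component  # exact import path may differ across versions


class LengthStats(Component):
    """Hypothetical stage that computes the average text length of a dataset."""

    def __init__(self, dataset) -> None:
        super().__init__()
        self.dataset = dataset
        self.avg_length = 0.0

    def run(self) -> bool:
        # Custom behavior executed when this stage runs in the pipeline.
        lengths = [len(example) for example in self.dataset]
        self.avg_length = sum(lengths) / max(len(lengths), 1)
        # Returning False signals that this stage has finished its work.
        return False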
Now just execute:
flambe example.yaml
Note that defining objects like model and dataset ahead of time is optional; it’s useful if you want to reference the same model architecture multiple times later in the pipeline.
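For instance, a config could inline the dataset under the trainer instead of giving it its own stage; !@ links are then only needed for objects that are actually reused (an illustrative fragment, not part of the example above):

train: !Trainer
  dataset: !SSTDataset        # defined inline rather than as its own stage
    transform:
      text: !TextField
      label: !LabelField
  model: !@ model             # still a link, since the model is reused in eval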
Progress can be monitored via the Report Site (with full Tensorboard integration).
Features
Native support for hyperparameter search: using search tags (see !g in the example) users can define multi-variant pipelines; a short search sketch follows this list. More advanced search algorithms will be available in a coming release!
Remote and distributed experiments: users can submit Experiments to Clusters, which execute them in a distributed way (a cluster config sketch appears below). Full AWS integration is supported.
Visualize all your metrics and meaningful data using Tensorboard: log scalars, histograms, images, hparams and much more (see the logging sketch below).
Add custom code and objects to your pipelines: extend Flambé's functionality using our easy-to-use extensions mechanism (an example declaration follows below).
Modularity with hierarchical serialization: save different components from pipelines and load them safely anywhere (see the save/load sketch below).
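As a small illustration of multi-variant search, each !g tag below multiplies the number of trials (values are illustrative):

encoder: !PooledRNNEncoder
  input_size: 300
  rnn_type: !g [lstm, sru]     # 2 options
  n_layers: !g [2, 3]          # x 2 options
  hidden_size: !g [128, 256]   # x 2 options -> 8 variants in total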
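A cluster is itself defined in YAML, much like an experiment. The sketch below follows the general shape of an AWS cluster config; the field names are assumptions based on the documentation, so verify them against your Flambé version:

!AWSCluster

name: my-cluster

factories_num: 2              # assumed field: number of worker machines
factories_type: g3.4xlarge    # assumed field: instance type for the workers
orchestrator_type: t3.large   # assumed field: instance coordinating the run

key_name: my-ssh-key          # assumed AWS credential / networking fields
security_group: sg-XXXXXXXX
subnet_id: subnet-XXXXXXXX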
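Inside a custom Component, data can be sent to Tensorboard through Flambé's logging helper (a minimal sketch, assuming a log(tag, value, global_step) signature in flambe.logging):

from flambe.logging import log

loss, accuracy, step = 0.42, 0.81, 100  # placeholder values for illustration

# Called from within a Component's run method or training loop:
log('Training/Loss', loss, step)
log('Training/Accuracy', accuracy, step)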
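An extension is declared at the top of the config before the --- document separator, mapping a module name to its location (a local folder, pip package, or git URL); the module and class names below are hypothetical:

my_extension: /path/to/my_extension/   # folder containing a pip-installable module
---
!Experiment
name: my-experiment
pipeline:
  model: !my_extension.MyCustomModel   # custom Component provided by the extension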
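Any component in a pipeline can be saved and reloaded independently; a minimal sketch, assuming the top-level flambe.save and flambe.load helpers:

import flambe

# Reload a component saved by an earlier run (e.g. the trained model from
# the `train` stage) and save it again to a new location.
model = flambe.load('path/to/saved/model')
flambe.save(model, 'path/to/new/location')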
Next Steps
Full documentation, tutorials and much more at https://flambe.ai
Contact
You can reach us at flambe@asapp.com