
Large-Scale Machine and Deep Learning in PyTorch.


PyBlaze is an unobtrusive, high-level library for large-scale machine and deep learning in PyTorch. It is engineered to cut boilerplate code while preserving the flexibility of PyTorch to create just about any deep learning model.


Plenty of tutorials are available in the official documentation. The most basic tutorial builds a classifier for CIFAR10.


PyBlaze is available on PyPI and can be installed as follows:

pip install pyblaze

Library Design

PyBlaze revolves around the concept of an engine. An engine is a powerful abstraction for combining a model's definition with the algorithm required to optimize its parameters according to some data. Engines provided by PyBlaze are focused on generalization: while the engine encapsulates the optimization algorithm, the user must explicitly define the optimization objective (usually the loss function).
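The division of labor described above can be illustrated with a minimal, framework-free sketch. This is not PyBlaze's actual API — the class names, methods, and gradient interface here are purely illustrative assumptions — it only demonstrates the pattern: the engine owns the optimization loop, while the user supplies the objective.

```python
# Illustrative sketch of the "engine" idea (NOT PyBlaze's actual API):
# the engine encapsulates the optimization loop; the user defines the loss.

class Engine:
    """Combines a model with a generic optimization loop."""

    def __init__(self, model):
        self.model = model

    def train(self, data, loss_fn, lr=0.1, epochs=1):
        # The engine drives iteration; the objective is pluggable.
        for _ in range(epochs):
            for x, y in data:
                grad = loss_fn.gradient(self.model, x, y)
                self.model.weight -= lr * grad


class LinearModel:
    """Toy one-parameter model: y = w * x."""

    def __init__(self):
        self.weight = 0.0

    def __call__(self, x):
        return self.weight * x


class SquaredError:
    """User-supplied objective with an analytic gradient."""

    def gradient(self, model, x, y):
        # d/dw (w*x - y)^2 = 2 * (w*x - y) * x
        return 2 * (model(x) - y) * x


data = [(1.0, 2.0), (2.0, 4.0)]  # samples from y = 2x
engine = Engine(LinearModel())
engine.train(data, SquaredError(), lr=0.05, epochs=50)
```

After training, `engine.model.weight` converges to the true slope 2.0; swapping in a different objective changes what is learned without touching the loop.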

However, engines go far beyond implementing the optimization algorithm. Specifically, they further provide the following features:

  • Evaluation: During training, validation data can be used to evaluate the generalization performance of the model at regular intervals. Arbitrary metrics may also be computed.

  • Callbacks: During training and model evaluation, callbacks serve as hooks called at specific events in the process. This makes it possible to easily use some tracking framework, perform early stopping, or dynamically adjust parameters over the course of the training. Custom callbacks can easily be created.

  • GPU Support: Training and model evaluation are automatically performed on all available GPUs. The same code that works for the CPU works for the GPU ... and also for multiple GPUs.
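The callback mechanism in particular can be sketched in a few lines. The hook name `after_epoch` and the surrounding classes below are illustrative assumptions, not PyBlaze's actual interface; the sketch only shows how an engine firing hooks at fixed events enables early stopping.

```python
# Hedged sketch of the callback pattern: hooks fire at fixed events in
# the training process. Names here are illustrative, NOT PyBlaze's API.

class EarlyStopping:
    """Stops training when the validation loss stops improving."""

    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def after_epoch(self, val_loss):
        # Hook called once per epoch; returns True to request a stop.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


def train(val_losses, callbacks):
    # Toy loop: each "epoch" reads a precomputed validation loss and
    # fires the callbacks, as a real engine would after evaluation.
    for epoch, loss in enumerate(val_losses):
        if any(cb.after_epoch(loss) for cb in callbacks):
            return epoch  # epoch at which training stopped
    return len(val_losses) - 1


stopper = EarlyStopping(patience=2)
stopped_at = train([0.9, 0.7, 0.8, 0.85, 0.84], [stopper])
```

Here the loss stops improving after the second epoch, so training halts two epochs later; tracking frameworks or parameter schedules plug in the same way.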

Available Engines

Engines are currently implemented for the following training procedures:

  • pyblaze.nn.MLEEngine: This is the most central engine, as it enables both supervised and unsupervised learning. Depending only on the loss, it can therefore adapt to many different problems: classification, regression, (variational) autoencoders, and more. To simplify initialization (as configuration requires toggling some settings), there exist specialized MLE engines. Currently, the only one is pyblaze.nn.AutoencoderEngine.

  • pyblaze.nn.WGANEngine: This engine is specifically designed for training Wasserstein GANs. A dedicated engine is required because the generator and critic are trained independently.
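Why Wasserstein GAN training does not fit the standard loop can be seen from its update schedule: the critic is typically updated several times per generator update. The sketch below is a toy illustration of that schedule under assumed names; it is not PyBlaze code.

```python
# Toy illustration (NOT PyBlaze's API) of the alternating update
# schedule that makes WGAN training need its own engine: several
# critic steps are interleaved with each generator step.

def wgan_training_schedule(steps, critic_iters=5):
    """Returns the sequence of update kinds a WGAN loop performs."""
    schedule = []
    for _ in range(steps):
        schedule.extend(["critic"] * critic_iters)  # inner critic loop
        schedule.append("generator")                # one generator step
    return schedule


schedule = wgan_training_schedule(steps=2, critic_iters=5)
```

A single-loss engine has no place for this two-phase interleaving, which is why the procedure warrants its own engine.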

Implementing a custom engine is rarely necessary for common problems. However, when working on highly customized machine learning models, it can be worthwhile. Usually, it is sufficient to implement the train_batch and eval_batch methods to specify how to perform training and evaluation, respectively, for a single batch of data. Consult the documentation of pyblaze.nn.Engine to read about all methods available for override.
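The override pattern just described can be sketched as follows. The method names `train_batch` and `eval_batch` come from the text above; the base class and everything else stand in for `pyblaze.nn.Engine` as illustrative assumptions, not its real implementation.

```python
# Sketch of the custom-engine pattern: the base class drives iteration,
# the subclass defines per-batch behavior. The base class here is a
# stand-in for pyblaze.nn.Engine, not its actual implementation.

class Engine:
    """Drives iteration over batches; subclasses fill in the details."""

    def train_epoch(self, batches):
        return [self.train_batch(batch) for batch in batches]

    def evaluate(self, batches):
        losses = [self.eval_batch(batch) for batch in batches]
        return sum(losses) / len(losses)

    def train_batch(self, batch):
        raise NotImplementedError

    def eval_batch(self, batch):
        raise NotImplementedError


class MeanEngine(Engine):
    """Custom engine: learns the running mean of scalar batches."""

    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def train_batch(self, batch):
        # Incrementally update the mean estimate from this batch.
        for x in batch:
            self.count += 1
            self.estimate += (x - self.estimate) / self.count
        return self.estimate

    def eval_batch(self, batch):
        # Mean squared error of the current estimate on this batch.
        return sum((x - self.estimate) ** 2 for x in batch) / len(batch)


engine = MeanEngine()
engine.train_epoch([[1.0, 2.0], [3.0, 4.0]])
```

Only the two per-batch methods needed to be written; epoch iteration and evaluation averaging come from the base class, which mirrors how an engine framework keeps custom models lightweight.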


PyBlaze is licensed under the MIT License.

