An Open Source Machine Learning Library for Training Binarized Neural Networks
Larq
Larq is an open source machine learning library for training Quantized Neural Networks (QNNs) with extremely low precision weights and activations (e.g. 1-bit). Existing Deep Neural Networks tend to be large, slow and power-hungry, which prohibits many applications in resource-constrained environments. Larq is designed to provide an easy-to-use, composable way to train QNNs (e.g. Binarized Neural Networks) based on the tf.keras interface.
Getting Started
To build a QNN, Larq introduces the concepts of Quantized Layers and Quantizers. A Quantizer defines how a full-precision input is transformed into a quantized output, as well as the pseudo-gradient method used for the backward pass. Each Quantized Layer requires an input_quantizer and a kernel_quantizer, which describe how to quantize the activations of the previous layer and the weights, respectively. If both input_quantizer and kernel_quantizer are None, the layer is equivalent to a full-precision layer.
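To build intuition for quantizers such as ste_sign, here is a minimal NumPy sketch of the idea behind a sign quantizer with a Straight-Through Estimator: binarize on the forward pass, and on the backward pass let gradients through unchanged inside the clipping range. This is an illustrative sketch only, not Larq's actual implementation; the function names are made up.

```python
import numpy as np

def ste_sign_forward(x):
    # Forward pass: binarize to -1 / +1 (with 0 mapped to +1).
    return np.where(x >= 0, 1.0, -1.0)

def ste_sign_backward(x, upstream_grad):
    # Backward pass (straight-through estimator): pass the upstream
    # gradient through unchanged where |x| <= 1, zero it elsewhere.
    return upstream_grad * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
print(ste_sign_forward(x))                     # binarized values
print(ste_sign_backward(x, np.ones_like(x)))   # masked gradient
```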
You can define a binarized densely-connected layer that uses the Straight-Through Estimator as follows:

```python
import larq

larq.layers.QuantDense(
    32,
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)
```
This layer can be used inside a Keras model or with a custom training loop.
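Conceptually, such a quantized dense layer binarizes both the incoming activations and its latent full-precision kernel before an ordinary matrix multiplication. The following NumPy sketch illustrates that forward pass under this assumption; the function names are hypothetical and this is not Larq's internal code.

```python
import numpy as np

def binarize(x):
    # Sign function with 0 mapped to +1, as in ste_sign's forward pass.
    return np.where(x >= 0, 1.0, -1.0)

def quant_dense_forward(inputs, kernel):
    # Quantize the previous layer's activations and the latent
    # full-precision kernel, then perform a standard dense matmul.
    return binarize(inputs) @ binarize(kernel)

rng = np.random.default_rng(0)
inputs = rng.normal(size=(4, 8))    # batch of 4 samples, 8 features
kernel = rng.normal(size=(8, 32))   # latent weights for 32 units
out = quant_dense_forward(inputs, kernel)
print(out.shape)  # (4, 32)
```

Because every product term is ±1, each output entry is an even integer in [-8, 8] here, which is what makes such layers cheap to compute with bitwise operations on suitable hardware.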
Examples
Check out our examples to see how to train a Binarized Neural Network in just a few lines of code:
Requirements
Before installing Larq, please install:
- Python version 3.6 or 3.7
- TensorFlow version 1.13+ or 2.0.0
You can also check out one of our prebuilt Docker images.
Installation
You can install Larq with Python's pip package manager:
pip install larq