An Open Source Machine Learning Library for Training Binarized Neural Networks
Larq is an open source machine learning library for training Quantized Neural Networks (QNNs) with extremely low-precision weights and activations (e.g. 1-bit). Existing Deep Neural Networks tend to be large, slow, and power-hungry, which rules out many applications in resource-constrained environments. Larq is designed to provide an easy-to-use, composable way to train QNNs (e.g. Binarized Neural Networks) based on the tf.keras interface.
To build a QNN, Larq introduces the concepts of Quantized Layers and Quantizers. A Quantizer defines how a full-precision input is transformed into a quantized output, as well as the pseudo-gradient method used for the backward pass. Each Quantized Layer requires an input_quantizer and a kernel_quantizer that describe how to quantize the incoming activations of the previous layer and the weights of the layer, respectively. If both input_quantizer and kernel_quantizer are None, the layer is equivalent to a full-precision layer.
You can define a binarized densely-connected layer using the Straight-Through Estimator as follows:

```python
larq.layers.QuantDense(
    32,
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)
```
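To make the pseudo-gradient idea concrete, here is a minimal NumPy sketch (not Larq code, just an illustration of the concept) of how an `ste_sign`-style quantizer behaves: the forward pass binarizes the input to -1 or +1, while the backward pass uses the straight-through estimator, passing the upstream gradient through unchanged where the input lies in [-1, 1] and zeroing it elsewhere.

```python
import numpy as np

def ste_sign_forward(x):
    # Forward pass: binarize to -1 or +1 (sign, with 0 mapped to +1).
    return np.where(x >= 0, 1.0, -1.0)

def ste_sign_backward(x, upstream_grad):
    # Backward pass (straight-through estimator): the true gradient of
    # sign() is zero almost everywhere, so we approximate it with the
    # identity, clipped to the region |x| <= 1.
    return upstream_grad * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.0, 0.7, 3.0])
grad = np.ones_like(x)
ste_sign_forward(x)          # array([-1., -1.,  1.,  1.,  1.])
ste_sign_backward(x, grad)   # array([0., 1., 1., 1., 0.])
```

The clipping in the backward pass is why the layer above also uses the `weight_clip` constraint: weights that drift outside [-1, 1] would otherwise stop receiving gradients.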
Check out our examples on how to train a Binarized Neural Network in just a few lines of code:
Before installing Larq, please install:
You can also check out one of our prebuilt Docker images.
You can install Larq with Python's pip package manager:

```shell
pip install larq
```
Download the file for your platform.
| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| larq-0.1.1-py3-none-any.whl | 26.1 kB | Wheel | py3 |
| larq-0.1.1.tar.gz | 18.9 kB | Source | None |