
A Chisel based hardware generation library for deeply quantized neural networks.

Project description

Chisel4ml

Chisel4ml is an open-source library for generating highly parallel, dataflow-style hardware implementations of Deeply Quantized Neural Networks. These networks are typically trained using frameworks such as Brevitas and QKeras; however, any training framework is supported as long as it can export to QONNX.

Installation: from pip

  1. Install the package: pip install chisel4ml.
  2. Download a matching jar file from the GitHub releases page.
  3. Start the server: java -jar chisel4ml.jar. (You can change the port and temporary directory with -p and -d; use --help for details.)
  4. Paste the Python code below into a file and run python script.py.
import numpy as np
import qkeras
import tensorflow as tf
from chisel4ml import optimize, generate

w1 = np.array([[1, 2, 3, 4], [-4, -3, -2, -1], [2, -1, 1, 1]])
b1 = np.array([1, 2, 0, 1])
w2 = np.array([-1, 4, -3, -1]).reshape(4, 1)
b2 = np.array([2])

x = x_in = tf.keras.layers.Input(shape=3)
x = qkeras.QActivation(
    qkeras.quantized_bits(bits=4, integer=3, keep_negative=True)
)(x)
x = qkeras.QDense(
    4,
    kernel_quantizer=qkeras.quantized_bits(
        bits=4, integer=3, keep_negative=True, alpha=np.array([0.5, 0.25, 1, 0.25])
    ),
)(x)
x = qkeras.QActivation(qkeras.quantized_relu(bits=3, integer=3))(x)
x = qkeras.QDense(
    1,
    kernel_quantizer=qkeras.quantized_bits(
        bits=4, integer=3, keep_negative=True, alpha=np.array([0.125])
    ),
)(x)
x = qkeras.QActivation(qkeras.quantized_relu(bits=3, integer=3))(x)
model = tf.keras.Model(inputs=[x_in], outputs=[x])
model.compile()
model.layers[2].set_weights([w1, b1])
model.layers[4].set_weights([w2, b2])
data = np.array(
    [
        [0.0, 0.0, 0.0],
        [0.0, 1.0, 2.0],
        [2.0, 1.0, 0.0],
        [4.0, 4.0, 4.0],
        [7.0, 7.0, 7.0],
        [6.0, 0.0, 7.0],
        [3.0, 3.0, 3.0],
        [7.0, 0.0, 0.0],
        [0.0, 7.0, 0.0],
        [0.0, 0.0, 7.0],
    ]
)


opt_model = optimize.qkeras_model(model)
accelerators, lbir_model = generate.accelerators(
    opt_model,
    minimize="delay"
)
circuit = generate.circuit(opt_model)
for x in data:
    sw_res = opt_model.predict(np.expand_dims(x, axis=0))
    hw_res = circuit(x)
    assert np.array_equal(sw_res.flatten(), hw_res.flatten())
circuit.delete_from_server()

This will generate a circuit for a simple two-layer fully-connected neural network and store it in /tmp/.chisel4ml/circuit0. If you have Verilator installed, you can also pass use_verilator=True to the generate.circuit function. Without Verilator, only a FIRRTL file is generated (it can be converted to Verilog using firtool); with Verilator, a SystemVerilog file is created as well.
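As a side note on the quantizers used in the example: with bits=4, integer=3, keep_negative=True there are no fractional bits, so quantized_bits rounds values to signed 4-bit integers in the range [-8, 7]. A rough NumPy sketch of this behaviour (illustrative only; QKeras's actual implementation additionally applies the per-channel alpha scales, which this ignores):

```python
import numpy as np

# Illustrative fixed-point quantizer in the spirit of qkeras.quantized_bits.
# This is a simplified sketch and ignores details such as the `alpha` scales.
def quantize(x, bits=4, integer=3, keep_negative=True):
    frac = bits - integer - (1 if keep_negative else 0)  # fractional bits
    scale = 2.0 ** frac
    lo = -(2 ** (bits - 1)) / scale if keep_negative else 0.0
    hi = (2 ** (bits - 1) - 1) / scale if keep_negative else (2 ** bits - 1) / scale
    # Round to the nearest representable value, then saturate to the range.
    return np.clip(np.round(x * scale) / scale, lo, hi)

print(quantize(np.array([0.3, 4.7, 9.0, -12.0])))  # → [ 0.  5.  7. -8.]
```

With bits=4, integer=3 the 9.0 and -12.0 inputs saturate to 7 and -8, which is exactly the range the generated hardware datapath has to cover.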

Chisel4ml also supports convolutional and max-pooling layers, and has some support for computing FFTs and log-mel feature energies (audio features) in hardware.
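Functionally, a max-pooling layer reduces each non-overlapping k×k window of a feature map to its maximum. A small NumPy sketch of the reference computation (purely illustrative of the operation, not of how chisel4ml implements it in hardware):

```python
import numpy as np

def maxpool2d(x, k=2):
    # Non-overlapping k x k max pooling over a 2-D feature map.
    h, w = x.shape
    x = x[: h - h % k, : w - w % k]  # drop ragged edge rows/columns
    # Group rows and columns into k-sized blocks, then take each block's max.
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [0, 9, 1, 1],
                 [2, 2, 3, 0]])
print(maxpool2d(fmap))  # → [[4 8]
                        #    [9 3]]
```

Because the operation is just a comparison tree per window, it maps naturally onto the same dataflow style as the dense and convolutional layers.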

Installation: from source

  1. Install the mill build tool.
  2. Install Python 3.8-3.10.
  3. Create a virtual environment: python -m venv venv/
  4. Activate the environment:
    • Linux: source venv/bin/activate
    • Windows: .\venv\Scripts\activate
  5. Upgrade pip: python -m pip install --upgrade pip
  6. Install chisel4ml: pip install -ve .[dev]
  7. Build the Python protobuf code: make
  8. Build the Scala code: mill chisel4ml.assembly
  9. In another terminal, run the tests: pytest --use-verilator -n auto
    • The --use-verilator flag is optional, but if you have Verilator installed it is highly recommended, since it is much faster.

ScalaDocs

To generate ScalaDocs, run mill chisel4ml.docJar; the documentation is written to out/chisel4ml/docJar.dest/javadoc.

Project details


Download files

Download the file for your platform.

Source Distribution

chisel4ml-0.3.6.tar.gz (11.7 MB)

Uploaded Source

Built Distribution


chisel4ml-0.3.6-py3-none-any.whl (53.5 kB)

Uploaded Python 3

File details

Details for the file chisel4ml-0.3.6.tar.gz.

File metadata

  • Download URL: chisel4ml-0.3.6.tar.gz
  • Upload date:
  • Size: 11.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for chisel4ml-0.3.6.tar.gz
Algorithm Hash digest
SHA256 d1b117309118f8c8ab05746f86b115cfefaa5303117d2ef52e99e1ffd53916a7
MD5 91e0a8439892b7b46b69934bbf12517f
BLAKE2b-256 62bf8eea2f0b786b108e32b087013d41810e0929afb34e6714969ff8f38bb3fd

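To check a downloaded file against the digests above, you can hash it locally with Python's standard hashlib module (sha256_of is just a helper name used for this example):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large archives need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

# Compare the result against the SHA256 digest listed on this page, e.g.:
# sha256_of("chisel4ml-0.3.6.tar.gz") == "d1b11730...3916a7"
```

If the digest does not match the value published here, the download is corrupt or has been tampered with and should not be installed.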

Provenance

The following attestation bundles were made for chisel4ml-0.3.6.tar.gz:

Publisher: publish-to-pypi.yml on cs-jsi/chisel4ml

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file chisel4ml-0.3.6-py3-none-any.whl.

File metadata

  • Download URL: chisel4ml-0.3.6-py3-none-any.whl
  • Upload date:
  • Size: 53.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for chisel4ml-0.3.6-py3-none-any.whl
Algorithm Hash digest
SHA256 a30f9e57cdd15f3088f32b7d9f6f12b6135dd3c6edcd66c46d90778f09773e3c
MD5 5683959f5b17e109b50eda7747cd439c
BLAKE2b-256 3d2041aab7654c210fe09906ce9a5584c061f50613a3d1a2dca7cc8dba76a984


Provenance

The following attestation bundles were made for chisel4ml-0.3.6-py3-none-any.whl:

Publisher: publish-to-pypi.yml on cs-jsi/chisel4ml

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
