ZigZag - Deep Learning Hardware Design Space Exploration

Project description

This repository contains the new version of our tried-and-tested HW Architecture-Mapping Design Space Exploration (DSE) framework for Deep Learning (DL) accelerators. ZigZag bridges the gap between algorithmic DL decisions and their acceleration cost on specialized accelerators through fast and accurate HW cost estimation.

A crucial part of this is the mapping of the algorithmic computations onto the computational HW resources and memories. The framework provides multiple engines that can automatically find optimal mapping points in this search space.
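As a minimal sketch of what this looks like in practice, the snippet below drives ZigZag's cost estimation from Python. The entry-point name reflects the package's public API, but the argument names, file paths, and return shape are assumptions for illustration only; consult the Getting Started documentation for the actual interface.

    # Hedged sketch: argument names, paths, and return shape are assumptions.
    from zigzag.api import get_hardware_performance_zigzag

    # ZigZag takes a workload (e.g. an ONNX model), a HW architecture
    # description, and (partial) mapping constraints, searches the remaining
    # mapping space, and returns the estimated HW cost of the best point.
    results = get_hardware_performance_zigzag(
        workload="my_model.onnx",           # hypothetical model file
        accelerator="my_accelerator.yaml",  # hypothetical HW description
        mapping="my_mapping.yaml",          # hypothetical mapping file
    )
    print(results)  # e.g. estimated energy and latency of the best mapping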

Installation

Please take a look at the Installation page of our documentation.

Getting Started

Please take a look at the Getting Started page of our documentation to start using ZigZag.

Recent changes

In this new version, we have:

  • Added an interface with ONNX to directly parse ONNX models (see the export sketch after this list)
  • Overhauled our HW architecture definition to:
    • include multi-dimensional (>2D) MAC arrays.
    • include accurate interconnection patterns.
    • include multiple flexible accelerator cores.
  • Enhanced the cost model to support complex memories with variable port structures.
  • Revamped the whole project structure to be more modular.
  • Rewritten the project following OOP paradigms to facilitate user-friendly extensions and interfaces.
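Since ZigZag parses ONNX models directly, any framework that can export to ONNX can feed it. As a hedged example, the snippet below exports a small, purely illustrative PyTorch network with the standard torch.onnx.export call; the network and file name are placeholders.

    # Illustrative only: the network and the output file name are placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
    )
    dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input
    torch.onnx.export(model, dummy_input, "my_model.onnx")
    # "my_model.onnx" can then be handed directly to ZigZag's ONNX interface.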

Publication pointers

The general idea of ZigZag

L. Mei, P. Houshmand, V. Jain, S. Giraldo and M. Verhelst, "ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators," in IEEE Transactions on Computers, vol. 70, no. 8, pp. 1160-1174, 1 Aug. 2021, doi: 10.1109/TC.2021.3059962. paper

Detailed latency model explanation

L. Mei, H. Liu, T. Wu, H. E. Sumbul, M. Verhelst and E. Beigne, "A Uniform Latency Model for DNN Accelerators with Diverse Architectures and Dataflows," 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 2022, pp. 220-225, doi: 10.23919/DATE54114.2022.9774728. paper, slides, video

The new temporal mapping search engine

A. Symons, L. Mei and M. Verhelst, "LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-based Memory Allocation," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington DC, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458493. paper

Apply ZigZag for different design space exploration case studies

P. Houshmand, S. Cosemans, L. Mei, I. Papistas, D. Bhattacharjee, P. Debacker, A. Mallik, D. Verkest, M. Verhelst, "Opportunities and Limitations of Emerging Analog in-Memory Compute DNN Architectures," 2020 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2020, pp. 29.1.1-29.1.4, doi: 10.1109/IEDM13553.2020.9372006. paper

V. Jain, L. Mei and M. Verhelst, "Analyzing the Energy-Latency-Area-Accuracy Trade-off Across Contemporary Neural Networks," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington DC, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458553. paper

S. Colleman, T. Verelst, L. Mei, T. Tuytelaars and M. Verhelst, "Processor Architecture Optimization for Spatially Dynamic Neural Networks," 2021 IFIP/IEEE 29th International Conference on Very Large Scale Integration (VLSI-SoC), Singapore, Singapore, 2021, pp. 1-6, doi: 10.1109/VLSI-SoC53125.2021.9607013. paper

Extend ZigZag to support cross-layer depth-first scheduling

L. Mei, K. Goetschalckx, A. Symons and M. Verhelst, "DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling," 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023. paper, github

Extend ZigZag to support multi-core layer-fused scheduling

A. Symons, L. Mei, S. Colleman, P. Houshmand, S. Karl and M. Verhelst, "Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks," arXiv e-prints, 2022. doi: 10.48550/arXiv.2212.10612. paper, github

Download files

Download the file for your platform.

Source Distribution

zigzag-dse-2.1.3.tar.gz (111.7 kB)

Uploaded Source

Built Distribution

zigzag_dse-2.1.3-py3-none-any.whl (145.9 kB)

Uploaded Python 3

File details

Details for the file zigzag-dse-2.1.3.tar.gz.

File metadata

  • Download URL: zigzag-dse-2.1.3.tar.gz
  • Upload date:
  • Size: 111.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.5

File hashes

Hashes for zigzag-dse-2.1.3.tar.gz:

  • SHA256: 730e9f29164a044f2b25c8cd7148105e02e0614565246672dd1c292fe15f46b0
  • MD5: 0cbac0c1edeee31f0535f83bd32b59f5
  • BLAKE2b-256: 94175f575b537574c345b1e375fe90ed09de074308e925182de18d43302a5e98

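For readers who want to check a download against the digests above, here is a small sketch using only the Python standard library; the file name matches the source distribution listed above.

    # Verify a downloaded file against the published SHA256 digest.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "730e9f29164a044f2b25c8cd7148105e02e0614565246672dd1c292fe15f46b0"
    assert sha256_of("zigzag-dse-2.1.3.tar.gz") == expected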

File details

Details for the file zigzag_dse-2.1.3-py3-none-any.whl.

File metadata

  • Download URL: zigzag_dse-2.1.3-py3-none-any.whl
  • Upload date:
  • Size: 145.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.5

File hashes

Hashes for zigzag_dse-2.1.3-py3-none-any.whl:

  • SHA256: 3614ba2ef7a4f736d14f5e4f3ab619c6f481b5e151b42b3ddd278a57574940eb
  • MD5: c34b3247a8c4ea11fa9ac071af5c2ba2
  • BLAKE2b-256: 9bc5521e2b7d0fa30d6a5e44decc6da5cf5ace8e36d66c3a427f9b7f76d15ba4

