
ZigZag - Deep Learning Hardware Design Space Exploration

Project description


This repository presents the latest version of our tried-and-tested HW Architecture-Mapping Design Space Exploration (DSE) framework for Deep Learning (DL) accelerators. ZigZag bridges the gap between algorithmic DL decisions and their acceleration cost on specialized accelerators through fast and accurate HW cost estimation.

A crucial part of this is the mapping of the algorithmic computations onto the computational HW resources and memories. The framework provides multiple engines that automatically find optimal mapping points in this search space.

Installation

Please take a look at the Installation page of our documentation.
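For reference, the released package shown on this page can be installed from PyPI with "pip install zigzag-dse"; the Installation page has the authoritative, up-to-date steps.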

Getting Started

Please take a look at the Getting Started page of our documentation to learn how to use ZigZag.

A Jupyter-Notebook-based demo is also available for new users here.
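
To give a flavor of a typical run, here is a minimal Python sketch. It assumes the get_hardware_performance_zigzag helper in zigzag.api and uses placeholder input paths; the Getting Started page documents the exact entry point and arguments for your installed version.

    # Minimal sketch of a ZigZag run. Assumes the `zigzag.api` entry point
    # described in the documentation; names and arguments may differ per version.
    from zigzag.api import get_hardware_performance_zigzag

    # Placeholder input files: an ONNX workload, a HW architecture definition,
    # and a mapping definition.
    workload = "inputs/workload/resnet18.onnx"
    accelerator = "inputs/hardware/tpu_like.yaml"
    mapping = "inputs/mapping/tpu_like.yaml"

    # Run the cost model; the helper returns the total energy and latency,
    # plus the detailed per-layer cost-model evaluations.
    energy, latency, cmes = get_hardware_performance_zigzag(workload, accelerator, mapping)
    print(f"Total energy: {energy:.3e}, total latency: {latency:.3e}")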

Recent changes

In this new version, we have:

  • Integrated ZigZag-IMC into the framework, enabling definition of both digital cores and In-Memory-Computing cores via the user interface.
  • Added YAML (.yml) files as an additional output format when results are saved in full.
  • Added optional functions to remove unused top-level memories from the HW architecture.
  • Added an ONNX interface to directly parse ONNX models (see the sketch after this list).
  • Overhauled our HW architecture definition to:
    • include multi-dimensional (>2D) MAC arrays.
    • include accurate interconnection patterns.
    • include multiple flexible accelerator cores.
  • Enhanced the cost model to support complex memories with variable port structures.
  • Revamped the whole project structure to be more modular.
  • Rewritten the project following OOP paradigms to facilitate user-friendly extensions and interfaces.
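
As a concrete illustration of the ONNX interface above: ZigZag parses the .onnx file itself when given its path, but you can sanity-check a model beforehand with the standard onnx package (the path below is a placeholder):

    # Inspect an ONNX model before handing its path to ZigZag (illustrative only).
    import onnx

    model = onnx.load("inputs/workload/resnet18.onnx")  # placeholder path
    onnx.checker.check_model(model)  # validate the graph definition

    # List the operator types ZigZag's parser will encounter.
    print(sorted({node.op_type for node in model.graph.node}))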

Publication pointers

The general idea of ZigZag

L. Mei, P. Houshmand, V. Jain, S. Giraldo and M. Verhelst, "ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators," in IEEE Transactions on Computers, vol. 70, no. 8, pp. 1160-1174, 1 Aug. 2021, doi: 10.1109/TC.2021.3059962. paper

Detailed latency model explanation

L. Mei, H. Liu, T. Wu, H. E. Sumbul, M. Verhelst and E. Beigne, "A Uniform Latency Model for DNN Accelerators with Diverse Architectures and Dataflows," 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 2022, pp. 220-225, doi: 10.23919/DATE54114.2022.9774728. paper, slides, video

The new temporal mapping search engine

A. Symons, L. Mei and M. Verhelst, "LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-based Memory Allocation," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458493. paper, slides, video

Apply ZigZag for different design space exploration case studies

P. Houshmand, S. Cosemans, L. Mei, I. Papistas, D. Bhattacharjee, P. Debacker, A. Mallik, D. Verkest, M. Verhelst, "Opportunities and Limitations of Emerging Analog in-Memory Compute DNN Architectures," 2020 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2020, pp. 29.1.1-29.1.4, doi: 10.1109/IEDM13553.2020.9372006. paper, slides, video

V. Jain, L. Mei and M. Verhelst, "Analyzing the Energy-Latency-Area-Accuracy Trade-off Across Contemporary Neural Networks," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458553. paper, slides, video

S. Colleman, T. Verelst, L. Mei, T. Tuytelaars and M. Verhelst, "Processor Architecture Optimization for Spatially Dynamic Neural Networks," 2021 IFIP/IEEE 29th International Conference on Very Large Scale Integration (VLSI-SoC), Singapore, 2021, pp. 1-6, doi: 10.1109/VLSI-SoC53125.2021.9607013. paper, slides, video

S. Colleman, P. Zhu, W. Sun and M. Verhelst, "Optimizing Accelerator Configurability for Mobile Transformer Networks," 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Republic of Korea, 2022, pp. 142-145, doi: 10.1109/AICAS54282.2022.9869945. paper, slides, video

Extend ZigZag to support cross-layer depth-first scheduling

L. Mei, K. Goetschalckx, A. Symons and M. Verhelst, "DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling," 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023. paper, slides, github

Extend ZigZag to support multi-core layer-fused scheduling

A. Symons, L. Mei, S. Colleman, P. Houshmand, S. Karl and M. Verhelst, "Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks," arXiv e-prints, 2022, doi: 10.48550/arXiv.2212.10612. paper, github

S. Karl, A. Symons, N. Fasfous and M. Verhelst, "Genetic Algorithm-based Framework for Layer-Fused Scheduling of Multiple DNNs on Multi-core Systems," 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 2023, pp. 1-6, doi: 10.23919/DATE56975.2023.10137070. paper, slides, video

Extend ZigZag to support In-Memory-Computing cores

J. Sun, P. Houshmand and M. Verhelst, "Analog or Digital In-Memory Computing? Benchmarking through Quantitative Modeling," Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD), October 2023. paper, poster, slides, video

P. Houshmand, J. Sun and M. Verhelst, "Benchmarking and modeling of analog and digital SRAM in-memory computing architectures," arXiv preprint arXiv:2305.18335 (2023). paper

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

zigzag_dse-3.3.0.tar.gz (2.9 MB)


Built Distribution

zigzag_dse-3.3.0-py3-none-any.whl (3.0 MB)


File details

Details for the file zigzag_dse-3.3.0.tar.gz.

File metadata

  • Download URL: zigzag_dse-3.3.0.tar.gz
  • Upload date:
  • Size: 2.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.9

File hashes

Hashes for zigzag_dse-3.3.0.tar.gz

  • SHA256: 62bb62da054cc2163bd4ac7b2e6a1cbb4d4f61edacc91075bf400dac54b28642
  • MD5: 6024578270408ea3192c6fe0520c8d6c
  • BLAKE2b-256: 6d257db0cde7d443eb79a3317912dc9c05fb2c971e68cfd82af0d673f58e4958

See more details on using hashes here.
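
To verify a downloaded file against the digests above, here is a minimal Python sketch using only the standard library (the file is assumed to sit in the current directory):

    # Verify a downloaded distribution against its published SHA256 digest.
    import hashlib

    EXPECTED = "62bb62da054cc2163bd4ac7b2e6a1cbb4d4f61edacc91075bf400dac54b28642"

    with open("zigzag_dse-3.3.0.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # A mismatch means the file is corrupted or is not the published artifact.
    assert digest == EXPECTED, f"SHA256 mismatch: {digest}"
    print("SHA256 OK")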

File details

Details for the file zigzag_dse-3.3.0-py3-none-any.whl.

File metadata

  • Download URL: zigzag_dse-3.3.0-py3-none-any.whl
  • Upload date:
  • Size: 3.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.9

File hashes

Hashes for zigzag_dse-3.3.0-py3-none-any.whl

  • SHA256: fd579a99f38fb119ab76cb6d3bcb8a2e33fbeb230d8ff671fcaeb92c85c5b760
  • MD5: 8bd55ad4bfbeaa4a50b79431ee3aafb8
  • BLAKE2b-256: 039ab3ffb02e5adb60091acfd254000ac764041c7f3747c5368a5e9337f436ef

See more details on using hashes here.
