
Hash-based Deep Learning


Table of Contents

  1. Overview
  2. Install
    1. Requirement
    2. Install from PyPI
    3. Install from Source
  3. Features
  4. Implementation

Overview

This repository is an unofficial third-party re-implementation of SLIDE [1].

We provide

  • A Python package
  • Hash-based Deep Learning
  • Parallel computing based on the C++17 parallel STL

We don't provide

  • Explicitly CPU-optimized code such as AVX intrinsics (we just rely on compiler optimization)
  • Compiled binaries (you need to compile the package yourself)

Install

There are two options: "Install from PyPI" and "Install from Source". For ordinary users, "Install from PyPI" is recommended.

In both cases, a sufficiently recent C++ compiler is necessary.

Requirement

  • Recent C++ compiler with parallel STL algorithm support
  • Python 3

The requirements can be installed on the gcc:10 Docker image:

# On local machine
docker run -it gcc:10 bash

# On gcc:10 image
# (libtbb-dev provides the TBB backend that GCC's parallel STL requires)
apt update && apt install -y python3-pip libtbb-dev
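
To check that the toolchain actually supports the C++17 parallel STL, a small test program like the following should compile and run (the file name pstl_check.cpp is just an example):

// pstl_check.cpp -- minimal check that the parallel STL works.
// Compile on the gcc:10 image with: g++ -std=c++17 -O2 pstl_check.cpp -ltbb
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1000000);
    std::iota(v.begin(), v.end(), 0.0);

    // Parallel reduction; GCC dispatches this to TBB (hence libtbb-dev).
    double sum = std::reduce(std::execution::par, v.begin(), v.end());

    std::cout << "sum = " << sum << std::endl;  // expect 4.999995e+11
    return 0;
}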

Install from PyPI

pip install HashDL

Install from Source

git clone https://gitlab.com/ymd_h/hashdl.git HashDL
cd HashDL
pip install .

Features

  • Neural Network
    • hash-based sparse dense layer
  • Activation
    • ReLU
    • linear (no activation)
    • sigmoid
  • Optimizer
    • SGD
    • Adam [2]
  • Weight Initializer
    • constant
    • Gaussian distribution
  • Hash for similarity (see the sketch below)
    • WTA
    • DWTA [3]
  • Scheduler for hash update
    • constant
    • exponential decay

The current architecture does not support CNNs.
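
To illustrate the hash-for-similarity idea, here is a minimal WTA-style sketch. It is not HashDL's actual implementation, and all names are hypothetical; DWTA [3] extends this scheme with a densification step so that sparse inputs still produce a valid value for every hash.

// Illustrative WTA hash sketch; not HashDL's actual code.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// One WTA hash function: look at `bin_size` randomly chosen coordinates
// of the input and return the position (0 .. bin_size-1) of the largest.
// Similar inputs tend to agree on this value, so it can be used to bucket
// neurons whose weights align well with the current activation.
class WTAHash {
public:
    // Assumes bin_size <= dim.
    WTAHash(std::size_t dim, std::size_t bin_size, std::mt19937& rng)
        : perm_(dim), bin_size_(bin_size) {
        std::iota(perm_.begin(), perm_.end(), std::size_t{0});
        std::shuffle(perm_.begin(), perm_.end(), rng);  // random coordinate order
    }

    std::size_t operator()(const std::vector<float>& x) const {
        std::size_t argmax = 0;
        for (std::size_t i = 1; i < bin_size_; ++i) {
            if (x[perm_[i]] > x[perm_[argmax]]) { argmax = i; }
        }
        return argmax;
    }

private:
    std::vector<std::size_t> perm_;  // random permutation of coordinates
    std::size_t bin_size_;           // how many coordinates to compare
};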

Implementation

The official reference implementation focused on performance and accepted some "dirtiness", such as hard-coded magic numbers for algorithm selection and unmanaged memory allocation.

We accept some (hopefully small) overhead in exchange for better software maintainability:

  • Polymorphism with inheritance and virtual function
  • RAII and smart pointer for memory management

This architecture allows us to construct and manage C++ classes from Python without recompilation.
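
The pattern looks roughly like the following sketch; the class names are illustrative and do not necessarily match HashDL's actual hierarchy:

// Illustrative pattern only; names do not match HashDL's actual classes.
#include <cstddef>
#include <memory>
#include <vector>

// Abstract interface; concrete optimizers override the virtual function.
class Optimizer {
public:
    virtual ~Optimizer() = default;
    virtual void update(std::vector<float>& param,
                        const std::vector<float>& grad) = 0;
};

class SGD final : public Optimizer {
public:
    explicit SGD(float lr) : lr_(lr) {}
    void update(std::vector<float>& param,
                const std::vector<float>& grad) override {
        for (std::size_t i = 0; i < param.size(); ++i) {
            param[i] -= lr_ * grad[i];
        }
    }
private:
    float lr_;
};

// RAII: the layer owns its optimizer through a smart pointer, so no manual
// delete is needed, and Python can select the concrete type at run time.
class Layer {
public:
    explicit Layer(std::unique_ptr<Optimizer> opt) : opt_(std::move(opt)) {}
private:
    std::unique_ptr<Optimizer> opt_;
};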

We also rely on the recent C++ standard and compiler optimizations:

  • Parallel STL from C++17
  • Thanks to RVO (or at least move semantics), returning std::vector by value is no longer as costly as it once was (see the sketch below).
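
A minimal sketch of both points, using an illustrative ReLU kernel (the function name and shapes are assumptions, not HashDL's actual code):

// Illustrative only: C++17 parallel STL plus return-by-value relying on RVO.
#include <algorithm>
#include <execution>
#include <vector>

std::vector<float> relu(const std::vector<float>& x) {
    std::vector<float> y(x.size());
    // Element-wise transform, parallelized (and vectorized) across cores.
    std::transform(std::execution::par_unseq, x.begin(), x.end(), y.begin(),
                   [](float v) { return v > 0.0f ? v : 0.0f; });
    return y;  // RVO/move: the buffer is not copied
}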

Footnotes

[1] B. Chen et al., "SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems", MLSys 2020 (arXiv, code)

[2] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015 (arXiv)

[3] B. Chen et al., "Densified Winner Take All (WTA) Hashing for Sparse Datasets", Uncertainty in Artificial Intelligence (UAI) 2018
