A library for accelerating knowledge distillation training

Project description

efficient-knowledge-distillation

Efficient KD using TensorRT inference on teacher network

Background

Knowledge Distillation (KD) refers to the practice of using the outputs of a large teacher network to train a (usually) smaller student network. This project leverages TensorRT to accelerate that process. It is common practice in KD, especially in dark-knowledge-style techniques, to pre-compute the logits from the teacher network and save them to disk. When training the student network, the pre-computed logits are used as-is to teach the student. This saves GPU resources, since the large teacher network does not need to be loaded into GPU memory during training.
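Concretely, the pre-computed variant looks something like the following sketch (the teacher model, dataset and output file name here are placeholders for illustration, not part of this package):

import torch
import torchvision

# Placeholder teacher and dataset; any model/DataLoader pair works the same way.
teacher = torchvision.models.resnet50(pretrained=True).cuda().eval()
loader = torch.utils.data.DataLoader(
    torchvision.datasets.FakeData(size=256, transform=torchvision.transforms.ToTensor()),
    batch_size=64)

all_logits = []
with torch.no_grad():
    for images, _ in loader:
        all_logits.append(teacher(images.cuda()).cpu())

# The saved logits are frozen to whatever augmentation was applied above;
# they will not match augmentations drawn later during student training.
torch.save(torch.cat(all_logits), "teacher_logits.pt")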

Problem

In A good teacher is patient and consistent, Beyer et al. find that pre-computing logits is sub-optimal and hurts performance. The transformations applied to the input (e.g. blur or color jitter for images) differ between teacher and student, so the teacher logits do not correspond to the inputs actually seen by the student. Instead, for optimal knowledge distillation, the outputs of the teacher network should be computed on exactly the same inputs seen by the student.
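In plain PyTorch, that pattern looks roughly like the sketch below (the torchvision models are placeholders, not part of this package): the teacher runs inside the training loop, on exactly the augmented batch the student receives.

import torch
import torchvision

teacher = torchvision.models.resnet50(pretrained=True).cuda().eval()
student = torchvision.models.resnet18(num_classes=1000).cuda().train()

def distill_step(images, labels, optimizer, distill_loss):
    images, labels = images.cuda(), labels.cuda()
    with torch.no_grad():
        teacher_logits = teacher(images)  # the teacher sees the *same* augmented batch
    loss = distill_loss(student(images), teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The teacher forward pass is the expensive extra step here, which raises the question below.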

How can we achieve the best teacher inference and student training performance on a GPU?

Solution

  • Use TensorRT to set up an inference engine and perform blazing-fast inference
  • Use the logits from TensorRT inference to train the student network (a rough sketch of this pipeline follows).
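One way this can be wired up with the TensorRT Python API is sketched below. This is a rough illustration, not this package's actual interface: it assumes the teacher has already been exported to ONNX as teacher.onnx, that the engine is built for a fixed input shape, and that torch CUDA tensor pointers are passed directly as engine bindings.

import tensorrt as trt
import torch

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="teacher.onnx"):
    # Parse the ONNX teacher and build a serialized engine, using FP16 where supported.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    return builder.build_serialized_network(network, config)

def load_engine(serialized_engine):
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(serialized_engine)
    return engine, engine.create_execution_context()

def teacher_logits(context, images, num_classes=1000):
    # Run the engine on an augmented batch that is already on the GPU.
    # The batch size must match the shape the engine was built for.
    images = images.contiguous().float().cuda()
    logits = torch.empty(images.shape[0], num_classes, device="cuda")
    context.execute_v2([int(images.data_ptr()), int(logits.data_ptr())])
    return logits

The logits come back as a CUDA tensor, so they can feed a distillation loss without leaving the GPU; supporting dynamic batch sizes would additionally require an optimization profile when building the engine.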

Environment

  • Recommended method: This project uses PyTorch with CUDA, tensorrt>=8, opencv and pycuda. The recommended way to get all of these is to use an NGC Docker container with a recent version of PyTorch.
sudo docker run -it --ipc=host --net=host --gpus all nvcr.io/nvidia/pytorch:22.08-py3 /bin/bash
# If you want to mount an external disk into the container, use the --volume switch

# Once the container is up and running, install pycuda
pip install pycuda
git clone https://github.com/dataplayer12/efficient-knowledge-distillation.git
cd efficient-knowledge-distillation

# Test the TensorRT engine with
python3 testtrt.py
  • Custom env

If you want to use your own environment with PyTorch, you need to get TensorRT and pycuda.

Follow the official guide to download the TensorRT .deb file and install it with the script provided in this repo. Finally, install pycuda.

git clone https://github.com/dataplayer12/efficient-knowledge-distillation.git
cd efficient-knowledge-distillation
bash install_trt.sh
# If needed, modify the version of the .deb file in the script before running.
# This script will also install pycuda.
# This might fail for a number of reasons, which is why the NGC container is recommended.

python3 testtrt.py # test that everything works

Status

  • The core TensorRT functionality works well (it can also be used for pure inference)
  • TensorRT-accelerated training is verified (teacher inference is accelerated with TRT)
  • Implemented Soft Target Loss by Hinton et al.
  • Implemented Hard Label Distillation by Touvron et al. (both losses are sketched below)
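For reference, minimal PyTorch sketches of both objectives are given below; the temperature T and weight alpha are illustrative values, not necessarily this package's defaults.

import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Hinton et al.: KL divergence between temperature-softened distributions,
    # blended with the usual cross-entropy on the true labels.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def hard_label_distillation(student_logits, teacher_logits, labels):
    # Touvron et al. (DeiT): average of cross-entropy on the true labels and
    # cross-entropy on the teacher's hard (argmax) predictions.
    teacher_labels = teacher_logits.argmax(dim=1)
    return 0.5 * (F.cross_entropy(student_logits, labels)
                  + F.cross_entropy(student_logits, teacher_labels))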

ToDo

  • Improve TRT inference and training by transferring the input only once.
  • Benchmark dynamic shapes on TRT
  • Better documentation
  • Make a PyPI package

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

darklight-0.1-py3-none-any.whl (36.1 kB)

Uploaded Python 3

File details

Details for the file darklight-0.1-py3-none-any.whl.

File metadata

  • Download URL: darklight-0.1-py3-none-any.whl
  • Upload date:
  • Size: 36.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.10

File hashes

Hashes for darklight-0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 1834d979e0fcfa69968fa0d5c75ef730af7cb5615247bf788a96e4d4054ee6d5
MD5 fc8f214f889b7d105ce95fb38dca93a0
BLAKE2b-256 1d449f2d4ba96dc9a3abc0bb127cb0c2332480fc4c095217052aeec39e32b275

