
SC2 Benchmark: Supervised Compression for Split Computing


This is the official repository of the sc2bench package and our TMLR paper, "SC2 Benchmark: Supervised Compression for Split Computing".

As an intermediate option between local computing and edge computing (full offloading), split computing has been attracting considerable attention from the research community.

In split computing, we split a neural network model into two sequences: the first sequence applies some elementary feature transformations on a weak mobile (local) device. The resulting intermediate, informative features are then transmitted over a wireless communication channel to a powerful edge server, which processes the bulk of the computation (the second sequence of the model).
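As a rough illustration (a minimal sketch using torchvision's ResNet-50, not the sc2bench API), splitting a model into a mobile-side head and a server-side tail can look like this:

import torch
from torch import nn
from torchvision.models import resnet50

# Illustration only: split ResNet-50 into a mobile-side head and a server-side tail.
model = resnet50(weights=None)  # or load pretrained weights of your choice
children = list(model.children())

# First sequence: elementary feature transformations on the mobile device
head = nn.Sequential(*children[:5])  # conv1, bn1, relu, maxpool, layer1
# Second sequence: the bulk of the computation on the edge server
tail = nn.Sequential(*children[5:-1], nn.Flatten(), children[-1])

x = torch.randn(1, 3, 224, 224)
features = head(x)       # intermediate features to be compressed and transmitted
logits = tail(features)  # completed on the edge server
print(features.shape, logits.shape)  # torch.Size([1, 256, 56, 56]) torch.Size([1, 1000])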

Figure: ImageNet (ILSVRC 2012): R-D (rate-distortion), ExR-D, and three-way tradeoffs for input compression and supervised compression with ResNet-50 as a reference model.

Input compression vs. Supervised compression

Input compression is an approach to saving transmitted data, but it leads to transmitting information irrelevant to the supervised task. To achieve a better supervised rate-distortion tradeoff, we define supervised compression as learning compressed representations for supervised downstream tasks such as classification, detection, or segmentation. Specifically for split computing, we term this problem setting SC2 (Supervised Compression for Split Computing).
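As a hedged illustration of that tradeoff (the actual training objectives and hyperparameters are defined by this repository's configuration files), a supervised compression objective combines an estimated bitrate of the transmitted representation with the supervised task loss:

# Illustration only: a generic rate-distortion style objective for supervised
# compression. `rate` estimates the bit cost of the transmitted features,
# `task_loss` is the supervised loss (e.g., cross-entropy), and `beta` trades
# off data size against model accuracy.
def supervised_rd_objective(rate, task_loss, beta=1.0):
    return rate + beta * task_loss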

Note that the training process can be done offline (i.e., on a single device without splitting), and it is different from "split learning".

SC2 Metrics

1. Encoder Size (to be minimized)

Local processing cost should be minimized as local (mobile) devices usually have battery constraints and limited computing power. As a simple proxy for the computing costs, we measure the number of encoder parameters and define the encoder size as the total number of bits to represent the parameters of the encoder.
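For example, the metric can be computed from a PyTorch encoder module as follows (a minimal sketch assuming 32-bit floating-point parameters; not necessarily the exact code used by sc2bench):

from torch import nn

# Total number of bits needed to represent all encoder parameters
# (assuming 32-bit floats unless bits_per_param is overridden).
def encoder_size_bits(encoder: nn.Module, bits_per_param: int = 32) -> int:
    return sum(p.numel() for p in encoder.parameters()) * bits_per_param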

2. Data Size (to be minimized)

We want to penalize large amounts of data being transferred from the mobile device to the edge server. We therefore measure the data size directly rather than bits per pixel (BPP), since BPP does not penalize feeding higher-resolution images to downstream models to achieve higher model accuracy.
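A minimal sketch of this measurement, assuming the compressed representation is available as a byte string produced by an entropy coder or codec (hypothetical helper, not the sc2bench API):

# Data size metric: kilobytes actually transferred from the mobile device
# to the edge server for a single sample.
def data_size_kb(compressed_bytes: bytes) -> float:
    return len(compressed_bytes) / 1024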

3. Model Accuracy (to be maximized)

While minimizing the two metrics above, we want to maximize model accuracy (i.e., minimize supervised distortion). Example model accuracy metrics are accuracy, mean average precision (mAP), and mean intersection over union (mIoU) for image classification, object detection, and semantic segmentation, respectively.
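For image classification, this metric reduces to standard top-1 accuracy, as in the minimal sketch below; mAP and mIoU play the same role for detection and segmentation.

import torch

# Top-1 accuracy over a batch of logits and ground-truth labels.
def top1_accuracy(logits: torch.Tensor, targets: torch.Tensor) -> float:
    return (logits.argmax(dim=1) == targets).float().mean().item()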

Installation

pip install sc2bench

Virtual Environments

For pipenv users,

pipenv install --python 3.8
# or create your own pipenv environment
pipenv install sc2bench 

For conda users,

conda env create -f environment.yaml
# or create your own conda environment
conda activate sc2-benchmark
pip install sc2bench

Datasets

See instructions here

Checkpoints

You can download our checkpoints, including trained model weights, here.
Unzip the downloaded zip files under ./, which will create ./resource/ckpt/.

Supervised Compression

  1. CR + BQ: "Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks"
  2. End-to-End: "End-to-end Learning of Compressible Features"
  3. Entropic Student: "Supervised Compression for Resource-Constrained Edge Computing Systems"

README.md explains how to train/test the implemented supervised compression methods.

Baselines: Input Compression

Each README.md gives instructions to run the baseline experiments.

Codec-based Feature Compression

# JPEG
python script/task/image_classification.py -test_only --config configs/ilsvrc2012/feature_compression/jpeg-resnet50.yaml
# WebP
python script/task/image_classification.py -test_only --config configs/ilsvrc2012/feature_compression/webp-resnet50.yaml

Citation

[Paper] [Preprint]

@article{matsubara2023sc2,
  title={{SC2 Benchmark: Supervised Compression for Split Computing}},
  author={Matsubara, Yoshitomo and Yang, Ruihan and Levorato, Marco and Mandt, Stephan},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2023},
  url={https://openreview.net/forum?id=p28wv4G65d}
}

Note

To measure the data size per sample precisely, it is important to keep the test batch size at 1 when testing.
Some baseline modules may expect a larger batch size if you have multiple GPUs.
In that case, add CUDA_VISIBLE_DEVICES=0 before your execution command (e.g., sh, bash, python) to force the script to use a single GPU (GPU 0 in this case).

For instance, to run an input compression experiment with a factorized prior (pretrained input compression model) and ResNet-50 (pretrained classifier):

CUDA_VISIBLE_DEVICES=0 sh script/neural_input_compression/ilsvrc2012-image_classification.sh factorized_prior-resnet50 8

Issues / Questions / Requests

The documentation is a work in progress. In the meantime, feel free to create an issue if you find a bug.
If you have a question or a feature request, start a new discussion here.


