TorchShard: Slicing a PyTorch Tensor into Parallel Shards

Project description

TorchShard is a lightweight engine for slicing a PyTorch tensor into parallel shards. It can reduce GPU memory usage and scale up training when the model has massive linear layers (e.g., ViT, BERT, and GPT) or a huge number of classes (millions). Its API follows the same design as PyTorch.
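The core idea is to split a large linear layer's weight matrix along one dimension so that each device only stores and computes one slice. The sketch below illustrates this slicing along the output dimension on a single process with plain PyTorch; it is illustrative only and does not use the TorchShard API itself (in real training each shard would live on its own GPU and the concatenation would become a distributed all-gather).

    # Single-process sketch of slicing a linear layer into parallel shards.
    # Not the TorchShard API; plain PyTorch, for illustration only.
    import torch
    import torch.nn as nn

    in_features, out_features, num_shards = 512, 1024, 4
    shard_size = out_features // num_shards

    # The full (memory-hungry) layer we want to shard.
    full = nn.Linear(in_features, out_features, bias=False)

    # Each shard holds only a slice of the weight matrix along the output dim.
    shards = [nn.Linear(in_features, shard_size, bias=False) for _ in range(num_shards)]
    with torch.no_grad():
        for i, shard in enumerate(shards):
            shard.weight.copy_(full.weight[i * shard_size:(i + 1) * shard_size])

    # Concatenating the shard outputs reproduces the full layer's output.
    x = torch.randn(8, in_features)
    y_full = full(x)
    y_sharded = torch.cat([shard(x) for shard in shards], dim=-1)
    print(torch.allclose(y_full, y_sharded, atol=1e-6))  # True

Because each shard's parameters (and their gradients and optimizer states) can live on a different GPU, per-device memory drops roughly by the number of shards, which is what makes very wide linear layers and million-class classifiers trainable.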

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

torchshard-0.1.0-py3-none-any.whl (21.0 kB)

Uploaded Python 3

File details

Details for the file torchshard-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: torchshard-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 21.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.0 pkginfo/1.4.2 requests/2.22.0 requests-toolbelt/0.9.1 tqdm/4.28.1 CPython/3.6.6

File hashes

Hashes for torchshard-0.1.0-py3-none-any.whl
  • SHA256: 9c2b7f82968f9a4cf61bd557e3230b209feb19581381076ccab09c20fa17e64d
  • MD5: 0f347c5e8d2e6f9526bf7a1e4a56c966
  • BLAKE2b-256: 50ee24b7d33845184f389fa48f117c7dcde7d0c41da32b31eaaf24b41a234fe7

See more details on using hashes here.
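To check a downloaded wheel against the SHA256 digest listed above, the standard library is enough; the snippet below assumes the wheel sits in the current working directory.

    # Verify the downloaded wheel against the published SHA256 digest.
    import hashlib

    expected = "9c2b7f82968f9a4cf61bd557e3230b209feb19581381076ccab09c20fa17e64d"

    with open("torchshard-0.1.0-py3-none-any.whl", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    print("OK" if digest == expected else "hash mismatch")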
