
This package provides a function that auto-selects the CUDA device with the largest free memory in PyTorch.
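For illustration, below is a minimal sketch of how such device selection can be done with PyTorch directly. It is not the package's own code, and the helper name is hypothetical; it only demonstrates the idea the package automates.

# Sketch: pick the CUDA device with the most free memory, falling back to CPU.
# The helper name is hypothetical and does not reflect the autocuda API.
import torch

def pick_freest_cuda_device() -> torch.device:
    """Return the CUDA device with the largest free memory, or CPU if none."""
    if not torch.cuda.is_available():
        return torch.device("cpu")
    free_bytes = []
    for i in range(torch.cuda.device_count()):
        free, _total = torch.cuda.mem_get_info(i)  # (free, total) in bytes
        free_bytes.append(free)
    best = max(range(len(free_bytes)), key=free_bytes.__getitem__)
    return torch.device(f"cuda:{best}")

device = pick_freest_cuda_device()
model = torch.nn.Linear(8, 2).to(device)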

Project description

The author of this package has not provided a project description

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

autocuda-0.16-py3-none-any.whl (5.1 kB)

Uploaded Python 3

File details

Details for the file autocuda-0.16-py3-none-any.whl.

File metadata

  • Download URL: autocuda-0.16-py3-none-any.whl
  • Upload date:
  • Size: 5.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.8

File hashes

Hashes for autocuda-0.16-py3-none-any.whl
Algorithm Hash digest
SHA256 c33398872f4c9336815dce158400438d616b8e1616d7ddfde5c9a203b71ec856
MD5 b8ed6d97b0c91eb3f090535c9a160ffd
BLAKE2b-256 751c44cc76f86e2584e4ca84b3a3a2952abe5f85fd9a0d679302a0d85b54d8ee

See more details on using hashes here.
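As a quick local check, the SHA256 digest listed above can be compared against a downloaded copy of the wheel; the file path in this sketch is an assumption and should point to wherever the file was saved.

# Compare the downloaded wheel's SHA256 against the digest listed above.
# The file path is an assumption; adjust it to the actual download location.
import hashlib

expected = "c33398872f4c9336815dce158400438d616b8e1616d7ddfde5c9a203b71ec856"
with open("autocuda-0.16-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("match" if digest == expected else "mismatch")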
