Automatically installs the CPU or GPU build of TensorFlow, determined by looking for a CUDA installation.

Project description

Solves the absolute mess that is installing tensorflow from your package requirements.

Just put it in your setuptools requirements and it’ll install the GPU version of TensorFlow if you have CUDA available; otherwise it will fall back to the CPU version.

The problem

Background: TensorFlow provides two PyPI packages: tensorflow and tensorflow-gpu. I’m sure you can infer which is compiled for which.

Let’s say you’re making a product or have a project:

  • It uses Tensorflow.

  • It will run on a myriad of classes of machines, and therefore needs to be able to operate and install on machines that may or may not have CUDA available.

  • You want to optimistically utilize a GPU if you can, but fall back to using just the CPU.

  • You want your package to be easily installable. Every time. Not just for half of your userbase.

  • You don’t want to toss the problem onto your users and fellow engineers.

Let’s be honest, your stringently detailed, absolute, step-by-step, microbe-resistant instructions could… maybe… possibly… not be followed and/or remembered exactly to a tee. You know, like everything else. Tech is hard.

Q: How do you add the proper one to your own package as a requirement when they both install to the same package namespace?

A: Well, you can’t. 🕶

Oh, and by the way, if you install one after the other, say, to fix having installed the wrong one first, pip will just blindly trample over the first one’s files instead of doing anything useful. (Note: this is not TensorFlow’s fault, but a more widespread problem in pyland.)
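Concretely, the trampling looks like this (illustrative commands; both 1.x packages unpack into the same site-packages/tensorflow directory):

```shell
# The second install silently overwrites the first's files in
# site-packages/tensorflow/, yet pip's metadata still lists both.
pip install tensorflow==1.11.0
pip install tensorflow-gpu==1.11.0
pip list | grep tensorflow   # shows both, but only one set of files survives
```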

You can try to choke it down with sweet hacks like:

  • Adding each in its own extras_require block (ruining your default, and therefore the path of least resistance)

  • Tossing the problem onto the user. You’re that guy now.
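The extras_require hack from the first bullet looks roughly like this (a sketch; my-project is a hypothetical package name):

```python
# setup.py -- the extras_require workaround (sketch).
# Nobody gets tensorflow by default; every user must remember the right extra.
from setuptools import setup

setup(
    name="my-project",
    version="0.1.0",
    install_requires=[],  # the default install ships no tensorflow at all
    extras_require={
        "cpu": ["tensorflow==1.11.0"],
        "gpu": ["tensorflow-gpu==1.11.0"],
    },
)
```

Users then have to run `pip install 'my-project[gpu]'` (or `[cpu]`) every single time; forgetting the extra yields an install with no TensorFlow at all.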

How does it work?

This package includes a list of all libraries required by each version of tensorflow in tfdetect/cuda.py.

Currently, only the check for the CUDA runtime library is active (by design), to be a bit more forgiving and allow for “strange” setups, even though I’ve yet to come across one. This is the finest heuristic for the job anyway, so it works out nicely, even in the face of your sweet sweet custom compiles; not to mention it ensures optional dependencies stay that way.
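A check like that can be sketched in a few lines of ctypes (illustrative only, not the package’s actual code; cudart is the CUDA runtime library’s conventional name):

```python
import ctypes
import ctypes.util


def cuda_runtime_available() -> bool:
    """Return True if a CUDA runtime library can be found and loaded."""
    # find_library resolves "cudart" to the platform's library name
    # (libcudart.so, libcudart.dylib, cudart64_*.dll) via the loader paths.
    name = ctypes.util.find_library("cudart")
    if name is None:
        return False
    try:
        ctypes.CDLL(name)  # actually try to load it, not just locate it
        return True
    except OSError:
        return False


# Pick the matching package name at dependency-resolution time.
package = "tensorflow-gpu" if cuda_runtime_available() else "tensorflow"
```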

Versions below 1.7 have not been tested, so the CUDA versions are likely to be mismatched. They were too far in the past to care about, I’m sorry.

Usage

In requirements.txt or install_requires:

tensorflow-auto-detect==1.10.0  # or any other release, 1-to-1 mapping to TF releases
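In setup.py terms, that one requirement replaces the whole extras dance (my-project is a hypothetical package name):

```python
from setuptools import setup

setup(
    name="my-project",
    version="0.1.0",
    install_requires=[
        # Resolves to tensorflow or tensorflow-gpu at install time,
        # matching the corresponding TF release.
        "tensorflow-auto-detect==1.10.0",
    ],
)
```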

As pip argument:

pip install tensorflow-auto-detect==1.8.1  # or any other release, 1-to-1 mapping to TF releases

Running tests

pip install '.[tests]'
pytest

