
🚀 PyTorch Installation Assistant

An intelligent, autonomous PyTorch installer that automatically detects your system, GPU, and CUDA configuration to install the optimal PyTorch setup for your hardware.

✨ Features

  • 🧠 Intelligent GPU Detection: Automatically detects NVIDIA, AMD, and Apple Silicon GPUs
  • 🎯 Smart CUDA Matching: Finds the best PyTorch version for your CUDA installation
  • 🤖 Autonomous CUDA Installation: Automatically installs CUDA on Windows using package managers
  • 📦 Complete Ecosystem: Installs torch, torchvision, and torchaudio with version compatibility
  • 🔄 Fallback Logic: Handles older CUDA versions and compatibility issues gracefully
  • 🎮 Hardware-Specific Optimization: Tailored recommendations for different GPU generations
  • 🔍 Comprehensive Testing: Post-install verification with tensor operations
  • 📊 Detailed Reporting: Shows complete system and package information
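
The vendor detection described above can be sketched in a few lines of Python. This is a simplified illustration, not the installer's actual code; the `recommend_backend` helper and its name-matching rules are assumptions (real detection would query `nvidia-smi`, Metal, or ROCm tooling):

```python
import platform

def recommend_backend(gpu_name: str) -> str:
    """Map a detected GPU name to a PyTorch compute backend (sketch)."""
    name = gpu_name.lower()
    if any(tag in name for tag in ("nvidia", "geforce", "rtx", "gtx", "quadro")):
        return "cuda"   # NVIDIA -> CUDA wheels
    if "apple" in name or (platform.system() == "Darwin" and platform.machine() == "arm64"):
        return "mps"    # Apple Silicon -> Metal Performance Shaders
    if any(tag in name for tag in ("amd", "radeon")):
        return "rocm"   # AMD -> ROCm (Linux only)
    return "cpu"        # unknown hardware -> CPU-only wheels

print(recommend_backend("GeForce RTX 3080"))  # cuda
```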

โš ๏ธ GPU Compatibility Notice

Testing Status: This installer has been primarily tested on GT 900 series and older GPUs, as well as GTX 10 series cards. While it should work with newer GPU generations (RTX 20/30/40 series), comprehensive testing across all NVIDIA GPU models is ongoing.

If you encounter issues with newer GPUs, please report them via GitHub issues to help improve compatibility.

๏ฟฝ๏ธ Installation

Simply download the torch_installer.py script - no additional dependencies required beyond Python's standard library.

# Download the script
curl -O https://raw.githubusercontent.com/coff33ninja/torch-installer/main/torch_installer.py

# Or clone the repository
git clone https://github.com/coff33ninja/torch-installer.git
cd torch-installer

🚀 Quick Start

Basic Installation

# Automatic installation with smart detection
python torch_installer.py

# CPU-only installation
python torch_installer.py --cpu-only

# Force specific CUDA version
python torch_installer.py --force-cuda cu121

CUDA Auto-Installation (Windows Only)

# Auto-install recommended CUDA version
python torch_installer.py --auto-install-cuda

# Install specific CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Dry-run to see what would be installed
python torch_installer.py --auto-install-cuda --dry-run

๏ฟฝ Commnand Reference

Core Installation Commands

| Command | Description | Example |
| --- | --- | --- |
| `python torch_installer.py` | Auto-detect and install optimal PyTorch | Basic usage |
| `--cpu-only` | Force CPU-only installation | `python torch_installer.py --cpu-only` |
| `--force-cuda cu121` | Force specific CUDA version | `python torch_installer.py --force-cuda cu121` |
| `--force-reinstall` | Reinstall even if PyTorch exists | `python torch_installer.py --force-reinstall` |

CUDA Management (Windows)

| Command | Description | Example |
| --- | --- | --- |
| `--auto-install-cuda` | Automatically install CUDA | `python torch_installer.py --auto-install-cuda` |
| `--cuda-version 12.1` | Specify CUDA version to install | `python torch_installer.py --auto-install-cuda --cuda-version 12.1` |

Information & Diagnostics

| Command | Description | Example |
| --- | --- | --- |
| `--gpu-info` | Show GPU and CUDA compatibility | `python torch_installer.py --gpu-info` |
| `--show-versions` | Display installed PyTorch ecosystem | `python torch_installer.py --show-versions` |
| `--show-matching` | Demo CUDA version matching logic | `python torch_installer.py --show-matching` |
| `--list-cuda` | List supported CUDA versions | `python torch_installer.py --list-cuda` |

Development & Testing

| Command | Description | Example |
| --- | --- | --- |
| `--dry-run` | Show commands without executing | `python torch_installer.py --dry-run` |
| `--log` | Log all output to timestamped file | `python torch_installer.py --log` |

🎮 GPU Support Matrix

NVIDIA GPUs

| GPU Generation | Recommended CUDA | PyTorch Support | Performance |
| --- | --- | --- | --- |
| RTX 40 Series | CUDA 12.1+ | ✅ Excellent | 🔥🔥🔥🔥🔥 |
| RTX 30 Series | CUDA 12.1+ | ✅ Excellent | 🔥🔥🔥🔥🔥 |
| RTX 20 Series | CUDA 11.8+ | ✅ Excellent | 🔥🔥🔥🔥 |
| GTX 16 Series | CUDA 11.8+ | ✅ Very Good | 🔥🔥🔥🔥 |
| GTX 10 Series | CUDA 11.8+ | ✅ Good | 🔥🔥🔥 |
| GT 700 Series | CUDA 11.8 | ⚠️ Limited | 🔥🔥 |
| Older GPUs | Manual Install | ❌ Not Recommended | 🔥 |

Other GPUs

| GPU Type | Support | Recommendation |
| --- | --- | --- |
| Apple Silicon (M1/M2/M3) | ✅ MPS Support | Automatic detection |
| AMD GPUs | ⚠️ ROCm (Linux only) | Manual ROCm installation |
| Intel GPUs | ❌ Not supported | Use CPU-only mode |

🔧 Usage Examples

Scenario 1: First-time Installation

# Let the installer detect everything automatically
python torch_installer.py

# Output example:
# 🚀 PyTorch Installation Assistant
# 🎮 Detected GPU: GeForce RTX 3080
# 🚀 Detected CUDA version: 12.1
# 🎯 Installing PyTorch with CUDA 121 wheels
# ✅ PyTorch installation completed successfully!

Scenario 2: Upgrading CUDA and PyTorch

# Auto-install newer CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Then reinstall PyTorch
python torch_installer.py --force-reinstall

Scenario 3: Troubleshooting Installation

# Check current setup
python torch_installer.py --show-versions

# See GPU compatibility
python torch_installer.py --gpu-info

# Test what would be installed
python torch_installer.py --dry-run

Scenario 4: Development Environment

# Install with logging for debugging
python torch_installer.py --log

# Check CUDA matching logic
python torch_installer.py --show-matching

🧠 Intelligent Features

Smart CUDA Version Matching

The installer automatically matches your CUDA version to compatible PyTorch versions:

๐Ÿ” Detected CUDA: 11.1
๐Ÿ“‹ Supported versions: ['121', '118', '117', '116', '113']
โš ๏ธ Fallback match: CUDA 111 -> PyTorch cu113 (oldest supported)
โœ… Would install: PyTorch 2.0.1 with CUDA 111
๐Ÿ“ฆ Full package set: torch=2.0.1, torchvision=0.15.2, torchaudio=2.0.2
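
A minimal sketch of this matching logic follows. It is illustrative only; the `match_cuda` name and the tag list are assumptions, not the installer's internals, but the behavior mirrors the output shown above (exact match, nearest lower version, oldest-supported fallback):

```python
def match_cuda(detected: str, supported=("121", "118", "117", "116", "113")) -> str:
    """Pick the best PyTorch wheel tag (e.g. 'cu118') for a detected CUDA version."""
    code = detected.replace(".", "")
    if code in supported:
        return "cu" + code  # exact match
    # All tags are three digits, so lexicographic order matches version order.
    lower = [tag for tag in sorted(supported) if tag <= code]
    # Newest supported tag not above the detected version; for very old
    # CUDA (e.g. 11.1), fall back to the oldest supported tag.
    return "cu" + (lower[-1] if lower else min(supported))

print(match_cuda("11.1"))  # cu113 -- the fallback case shown above
```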

GPU-Specific Recommendations

For older GPUs:

💡 GPU ACCELERATION UPGRADE GUIDE (GeForce GT 710):
   ⚠️ Your GeForce GT 710 is an older GPU with limited CUDA support
   💡 Recommended: CUDA 11.8 for optimal compatibility
   🤖 AUTOMATIC INSTALLATION AVAILABLE:
   • Run: python torch_installer.py --auto-install-cuda

For modern GPUs:

💡 GPU ACCELERATION UPGRADE GUIDE (GeForce RTX 3080):
   🚀 Your GeForce RTX 3080 supports modern CUDA versions
   ✨ Recommended: CUDA 12.1 for best performance
   🤖 AUTOMATIC INSTALLATION AVAILABLE:
   • Run: python torch_installer.py --auto-install-cuda

๐Ÿ” System Information Display

Complete Ecosystem View

python torch_installer.py --show-versions

# Output:
# 📊 Installed PyTorch Ecosystem:
#    🔥 PyTorch: 2.8.0+cu121
#    👁️ TorchVision: 0.23.0+cu121
#    🔊 TorchAudio: 2.8.0+cu121
#    🎯 CUDA Support: True
#    🚀 CUDA Version: 12.1
#    🎮 GPU Count: 1
#    🎮 GPU 0: GeForce RTX 3080

GPU Compatibility Analysis

python torch_installer.py --gpu-info

# Output:
# 🎮 GPU and CUDA Compatibility Information
# 🎮 Detected GPU: GeForce RTX 3080
# 💾 GPU Memory: 10240MB
# 🔍 Detected CUDA: 12.1
# ✅ Latest PyTorch supports your CUDA via cu121

🤖 CUDA Auto-Installation (Windows)

Prerequisites

  • Windows 10/11
  • NVIDIA GPU with compatible drivers
  • Package manager: winget (built-in) or chocolatey

Installation Process

  1. Detection: Identifies your GPU model and current CUDA version
  2. Recommendation: Suggests optimal CUDA version for your hardware
  3. Package Manager Check: Verifies winget or chocolatey availability
  4. Version Matching: Finds compatible CUDA version in repositories
  5. Installation: Automatically downloads and installs CUDA
  6. Verification: Confirms successful installation
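
Step 3 above (the package-manager check) can be approximated with the standard library. `find_package_manager` is a hypothetical helper, not the installer's API; it assumes `winget`/`choco` are on PATH when installed:

```python
import shutil

def find_package_manager():
    """Return the first available Windows package manager on PATH, or None.

    winget is preferred; chocolatey ('choco') is the fallback.
    """
    for name in ("winget", "choco"):
        if shutil.which(name):
            return name
    return None
```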

Example Output

python torch_installer.py --auto-install-cuda

# 🤖 CUDA Auto-Installation Mode
# 🎮 Detected GPU: GeForce RTX 3080
# 📋 Current CUDA: 11.8
# 🔧 Attempting to install CUDA 12.1 for GeForce RTX 3080
# 📦 Trying winget (Windows Package Manager)...
# ✅ Found CUDA versions in winget: 13.0, 12.9, 12.1...
# 🔧 Installing CUDA 12.1 via winget...
# ✅ Successfully installed CUDA 12.1
# 🔄 Please restart your command prompt and run the installer again

🔧 Advanced Configuration

Environment Variables

  • CUDA_HOME: Override CUDA installation path detection
  • PYTORCH_CUDA_ALLOC_CONF: Configure CUDA memory allocation
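
For example, a CUDA_HOME override might be honored like this. This is a sketch under stated assumptions: the function name is hypothetical and the default path varies by platform:

```python
import os
from pathlib import Path

def cuda_root(default: str = "/usr/local/cuda") -> Path:
    """Resolve the CUDA installation root.

    CUDA_HOME, when set, overrides any autodetected default.
    """
    override = os.environ.get("CUDA_HOME")
    return Path(override) if override else Path(default)
```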

Custom Package Managers

The installer supports:

  • winget: Native Windows package manager (recommended)
  • chocolatey: Third-party package manager with more versions

Offline Installation

For air-gapped environments:

  1. Download PyTorch wheels manually from https://pytorch.org/get-started/locally/
  2. Use pip install with local wheel files
  3. Run installer with --show-versions to verify
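
Step 2 can be done with pip's offline flags. The sketch below only builds the command; the wheel directory name is illustrative:

```python
import sys

def offline_install_cmd(wheel_dir: str) -> list:
    """Build a pip command that installs PyTorch from local wheels only.

    --no-index blocks network access; --find-links points pip at the
    directory of pre-downloaded wheels.
    """
    return [sys.executable, "-m", "pip", "install",
            "--no-index", "--find-links", wheel_dir,
            "torch", "torchvision", "torchaudio"]

# run with: subprocess.check_call(offline_install_cmd("./wheels"))
```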

๐Ÿ› Troubleshooting

Common Issues

"CUDA not available" after installation

# Check CUDA installation
nvidia-smi

# Verify PyTorch CUDA support
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall with force
python torch_installer.py --force-reinstall

Package manager not found (Windows)

# Install chocolatey
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Or update Windows for winget (Windows 10)
# winget is included in Windows 11 by default

Older CUDA version detected

# Check what would be installed
python torch_installer.py --show-matching

# Auto-upgrade CUDA (Windows)
python torch_installer.py --auto-install-cuda

# Or force specific PyTorch version
python torch_installer.py --force-cuda cu118

Debug Mode

# Enable detailed logging
python torch_installer.py --log --dry-run

# Check system compatibility
python torch_installer.py --gpu-info --show-versions

🔄 Update & Maintenance

Updating PyTorch

# Check for updates and reinstall
python torch_installer.py --force-reinstall

# Upgrade to specific version
python torch_installer.py --force-cuda cu121 --force-reinstall

Updating CUDA (Windows)

# Auto-install latest compatible version
python torch_installer.py --auto-install-cuda

# Install specific version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

๐Ÿค Contributing

Reporting Issues

When reporting issues, please include:

# System information
python torch_installer.py --gpu-info --show-versions --log

# Attach the generated log file

Feature Requests

  • GPU support for additional vendors
  • Package manager support for other platforms
  • Integration with conda/mamba environments

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • NVIDIA for CUDA toolkit and GPU drivers
  • PyTorch Team for the excellent deep learning framework
  • Microsoft for winget package manager
  • Chocolatey community for package management on Windows

📞 Support

For support and questions:

  • 📧 Create an issue on GitHub
  • 💬 Join the discussion in GitHub Discussions
  • 📖 Check the troubleshooting section above

Happy Deep Learning! 🚀🔥

Download files

Source Distribution: torch_installer_coff33ninja-1.0.3.tar.gz (36.3 kB)

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 6dc40947e9a307519efabca336143cece61793a670154ce940a57db5ef0f4cc6 |
| MD5 | 4a17f09788eeffc467cbabb003b15157 |
| BLAKE2b-256 | 723feca3ba4be262a341344bb0907a42a7fad1de4d139b7fdbf2cf10acb6b2ef |

Built Distribution: torch_installer_coff33ninja-1.0.3-py3-none-any.whl (17.7 kB)

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 444846664544e7867c97d7dbcdfae2739dffdf34f2924a39ae539676420aed29 |
| MD5 | 2d0ae2344c64ab90b1c5d66254558ac2 |
| BLAKE2b-256 | 13111f8519bb49651cee4d7ef69cdfb382d9bc616e18c763b9506d4785b1e352 |
