# PyTorch Installation Assistant

An intelligent, autonomous PyTorch installer that automatically detects your system, GPU, and CUDA configuration to install the optimal PyTorch setup for your hardware.

## Features

- **Intelligent GPU Detection**: Automatically detects NVIDIA, AMD, and Apple Silicon GPUs
- **Smart CUDA Matching**: Finds the best PyTorch version for your CUDA installation
- **Autonomous CUDA Installation**: Automatically installs CUDA on Windows using package managers
- **Complete Ecosystem**: Installs torch, torchvision, and torchaudio with version compatibility
- **Fallback Logic**: Handles older CUDA versions and compatibility issues gracefully
- **Hardware-Specific Optimization**: Tailored recommendations for different GPU generations
- **Comprehensive Testing**: Post-install verification with tensor operations
- **Detailed Reporting**: Shows complete system and package information
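The post-install verification mentioned above can be sketched roughly as follows. This is a minimal illustration, not the installer's actual code; it assumes only that `torch` may or may not be importable:

```python
def verify_pytorch_install():
    """Run a small tensor operation to confirm the install actually works."""
    try:
        import torch
    except ImportError:
        return False  # PyTorch is not installed at all
    x = torch.rand(3, 3)  # random 3x3 tensor on CPU
    y = x @ x.T           # a matrix multiply exercises the compute backend
    print(f"CUDA available: {torch.cuda.is_available()}")
    return y.shape == (3, 3)
```

A check like this catches broken installs (e.g. mismatched wheels) that a bare `import torch` would miss.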
## GPU Compatibility Notice

**Testing status**: This installer has been tested primarily on GTX 900 series and older GPUs, as well as GTX 10 series cards. While it should work with newer GPU generations (RTX 20/30/40 series), comprehensive testing across all NVIDIA GPU models is ongoing.

If you encounter issues with newer GPUs, please report them via GitHub issues to help improve compatibility.
## Installation

Simply download the `torch_installer.py` script - no additional dependencies are required beyond Python's standard library.

```bash
# Download the script
curl -O https://raw.githubusercontent.com/coff33ninja/torch-installer/main/torch_installer.py

# Or clone the repository
git clone https://github.com/coff33ninja/torch-installer.git
cd torch-installer
```
## Quick Start

### Basic Installation

```bash
# Automatic installation with smart detection
python torch_installer.py

# CPU-only installation
python torch_installer.py --cpu-only

# Force a specific CUDA version
python torch_installer.py --force-cuda cu121
```

### CUDA Auto-Installation (Windows Only)

```bash
# Auto-install the recommended CUDA version
python torch_installer.py --auto-install-cuda

# Install a specific CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Dry-run to see what would be installed
python torch_installer.py --auto-install-cuda --dry-run
```
## Command Reference

### Core Installation Commands

| Command | Description | Example |
|---|---|---|
| `python torch_installer.py` | Auto-detect and install optimal PyTorch | Basic usage |
| `--cpu-only` | Force CPU-only installation | `python torch_installer.py --cpu-only` |
| `--force-cuda cu121` | Force a specific CUDA version | `python torch_installer.py --force-cuda cu121` |
| `--force-reinstall` | Reinstall even if PyTorch exists | `python torch_installer.py --force-reinstall` |

### CUDA Management (Windows)

| Command | Description | Example |
|---|---|---|
| `--auto-install-cuda` | Automatically install CUDA | `python torch_installer.py --auto-install-cuda` |
| `--cuda-version 12.1` | Specify the CUDA version to install | `python torch_installer.py --auto-install-cuda --cuda-version 12.1` |

### Information & Diagnostics

| Command | Description | Example |
|---|---|---|
| `--gpu-info` | Show GPU and CUDA compatibility | `python torch_installer.py --gpu-info` |
| `--show-versions` | Display the installed PyTorch ecosystem | `python torch_installer.py --show-versions` |
| `--show-matching` | Demo the CUDA version matching logic | `python torch_installer.py --show-matching` |
| `--list-cuda` | List supported CUDA versions | `python torch_installer.py --list-cuda` |

### Development & Testing

| Command | Description | Example |
|---|---|---|
| `--dry-run` | Show commands without executing them | `python torch_installer.py --dry-run` |
| `--log` | Log all output to a timestamped file | `python torch_installer.py --log` |
## GPU Support Matrix

### NVIDIA GPUs

| GPU Generation | Recommended CUDA | PyTorch Support | Performance |
|---|---|---|---|
| RTX 40 Series | CUDA 12.1+ | Excellent | ★★★★★ |
| RTX 30 Series | CUDA 12.1+ | Excellent | ★★★★★ |
| RTX 20 Series | CUDA 11.8+ | Excellent | ★★★★ |
| GTX 16 Series | CUDA 11.8+ | Very good | ★★★★ |
| GTX 10 Series | CUDA 11.8+ | Good | ★★★ |
| GT 700 Series | CUDA 11.8 | Limited | ★★ |
| Older GPUs | Manual install | Not recommended | ★ |

### Other GPUs

| GPU Type | Support | Recommendation |
|---|---|---|
| Apple Silicon (M1/M2/M3) | MPS support | Automatic detection |
| AMD GPUs | ROCm (Linux only) | Manual ROCm installation |
| Intel GPUs | Not supported | Use CPU-only mode |
## Usage Examples

### Scenario 1: First-Time Installation

```bash
# Let the installer detect everything automatically
python torch_installer.py
```

Example output:

```text
PyTorch Installation Assistant
Detected GPU: GeForce RTX 3080
Detected CUDA version: 12.1
Installing PyTorch with CUDA 12.1 wheels
PyTorch installation completed successfully!
```

### Scenario 2: Upgrading CUDA and PyTorch

```bash
# Auto-install a newer CUDA version
python torch_installer.py --auto-install-cuda --cuda-version 12.1

# Then reinstall PyTorch
python torch_installer.py --force-reinstall
```

### Scenario 3: Troubleshooting an Installation

```bash
# Check the current setup
python torch_installer.py --show-versions

# See GPU compatibility
python torch_installer.py --gpu-info

# Test what would be installed
python torch_installer.py --dry-run
```

### Scenario 4: Development Environment

```bash
# Install with logging for debugging
python torch_installer.py --log

# Check the CUDA matching logic
python torch_installer.py --show-matching
```
## Intelligent Features

### Smart CUDA Version Matching

The installer automatically matches your detected CUDA version to a compatible PyTorch build:

```text
Detected CUDA: 11.1
Supported versions: ['121', '118', '117', '116', '113']
Fallback match: CUDA 11.1 -> PyTorch cu113 (oldest supported)
Would install: PyTorch 2.0.1 with cu113 wheels
Full package set: torch=2.0.1, torchvision=0.15.2, torchaudio=2.0.2
```
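The fallback matching shown above can be sketched like this. It is an illustrative reconstruction, not the installer's actual implementation, and the tag list is taken from the example output:

```python
# Supported PyTorch CUDA wheel tags, newest to oldest (from the example above)
SUPPORTED_TAGS = ["121", "118", "117", "116", "113"]

def match_cuda_wheel(detected_version: str) -> str:
    """Map a detected CUDA version such as '11.1' to a PyTorch wheel tag."""
    tag = detected_version.replace(".", "")  # '11.1' -> '111'
    if tag in SUPPORTED_TAGS:
        return f"cu{tag}"                    # exact match
    # Fallback: use the oldest supported tag, as in the example output
    return f"cu{SUPPORTED_TAGS[-1]}"

print(match_cuda_wheel("12.1"))  # cu121 (exact match)
print(match_cuda_wheel("11.1"))  # cu113 (fallback)
```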
### GPU-Specific Recommendations

For older GPUs:

```text
GPU ACCELERATION UPGRADE GUIDE (GeForce GT 710):
Your GeForce GT 710 is an older GPU with limited CUDA support
Recommended: CUDA 11.8 for optimal compatibility
AUTOMATIC INSTALLATION AVAILABLE:
  Run: python torch_installer.py --auto-install-cuda
```

For modern GPUs:

```text
GPU ACCELERATION UPGRADE GUIDE (GeForce RTX 3080):
Your GeForce RTX 3080 supports modern CUDA versions
Recommended: CUDA 12.1 for best performance
AUTOMATIC INSTALLATION AVAILABLE:
  Run: python torch_installer.py --auto-install-cuda
```
## System Information Display

### Complete Ecosystem View

```bash
python torch_installer.py --show-versions
```

```text
Installed PyTorch Ecosystem:
  PyTorch: 2.8.0+cu121
  TorchVision: 0.23.0+cu121
  TorchAudio: 2.8.0+cu121
  CUDA Support: True
  CUDA Version: 12.1
  GPU Count: 1
  GPU 0: GeForce RTX 3080
```

### GPU Compatibility Analysis

```bash
python torch_installer.py --gpu-info
```

```text
GPU and CUDA Compatibility Information
  Detected GPU: GeForce RTX 3080
  GPU Memory: 10240MB
  Detected CUDA: 12.1
  Latest PyTorch supports your CUDA via cu121
```
## CUDA Auto-Installation (Windows)

### Prerequisites

- Windows 10/11
- NVIDIA GPU with compatible drivers
- Package manager: winget (built in) or chocolatey

### Installation Process

1. **Detection**: Identifies your GPU model and current CUDA version
2. **Recommendation**: Suggests the optimal CUDA version for your hardware
3. **Package manager check**: Verifies winget or chocolatey availability
4. **Version matching**: Finds a compatible CUDA version in the repositories
5. **Installation**: Automatically downloads and installs CUDA
6. **Verification**: Confirms a successful installation
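The package-manager check in the steps above can be sketched as follows. This is an assumption-laden illustration rather than the installer's real code; in particular, the `Nvidia.CUDA` winget ID and the `choco search` query are examples, not guaranteed package names:

```python
import shutil
import subprocess

def find_package_manager():
    """Return the first available Windows package manager CLI, or None."""
    for tool in ("winget", "choco"):
        if shutil.which(tool):  # is the executable on PATH?
            return tool
    return None

def search_cuda_packages(tool):
    """Ask the package manager which CUDA toolkit packages it offers."""
    # Package IDs here are illustrative assumptions
    query = (["winget", "search", "Nvidia.CUDA"] if tool == "winget"
             else ["choco", "search", "cuda"])
    result = subprocess.run(query, capture_output=True, text=True)
    return result.stdout
```

On a machine without either tool, `find_package_manager()` returns `None`, which is the cue to print the manual-installation guidance.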
### Example Output

```bash
python torch_installer.py --auto-install-cuda
```

```text
CUDA Auto-Installation Mode
Detected GPU: GeForce RTX 3080
Current CUDA: 11.8
Attempting to install CUDA 12.1 for GeForce RTX 3080
Trying winget (Windows Package Manager)...
Found CUDA versions in winget: 13.0, 12.9, 12.1...
Installing CUDA 12.1 via winget...
Successfully installed CUDA 12.1
Please restart your command prompt and run the installer again
```
## Advanced Configuration

### Environment Variables

- `CUDA_HOME`: Override CUDA installation path detection
- `PYTORCH_CUDA_ALLOC_CONF`: Configure CUDA memory allocation
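A `CUDA_HOME` override can be honored with a lookup like the following sketch. The fallback paths are common defaults, not an exhaustive or authoritative list:

```python
import os
from pathlib import Path

def detect_cuda_home():
    """Honor CUDA_HOME if set; otherwise probe common install locations."""
    env = os.environ.get("CUDA_HOME")
    if env:
        return Path(env)  # explicit override always wins
    candidates = [
        Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"),  # Windows default
        Path("/usr/local/cuda"),                                      # Linux default
    ]
    for path in candidates:
        if path.exists():
            return path
    return None  # no CUDA toolkit found
```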
### Custom Package Managers

The installer supports:

- **winget**: Native Windows package manager (recommended)
- **chocolatey**: Third-party package manager with more versions
### Offline Installation

For air-gapped environments:

1. Download PyTorch wheels manually from https://pytorch.org/get-started/locally/
2. Use `pip install` with the local wheel files
3. Run the installer with `--show-versions` to verify
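Step 2 amounts to pointing pip at the downloaded wheels while forbidding network access. A minimal sketch (the directory name is an example):

```python
import sys
from pathlib import Path

def build_offline_install_cmd(wheel_dir):
    """Build a pip command that installs only from local wheel files."""
    wheels = sorted(str(p) for p in Path(wheel_dir).glob("*.whl"))
    # --no-index keeps pip from touching PyPI in air-gapped environments
    return [sys.executable, "-m", "pip", "install", "--no-index", *wheels]
```

Run the resulting command with `subprocess.run(cmd, check=True)` once the wheels are in place.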
## Troubleshooting

### Common Issues

**"CUDA not available" after installation**

```bash
# Check the CUDA installation
nvidia-smi

# Verify PyTorch CUDA support
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall with force
python torch_installer.py --force-reinstall
```

**Package manager not found (Windows)**

```powershell
# Install chocolatey
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Or update Windows to get winget (Windows 10)
# winget is included in Windows 11 by default
```

**Older CUDA version detected**

```bash
# Check what would be installed
python torch_installer.py --show-matching

# Auto-upgrade CUDA (Windows)
python torch_installer.py --auto-install-cuda

# Or force a specific PyTorch build
python torch_installer.py --force-cuda cu118
```

### Debug Mode

```bash
# Enable detailed logging
python torch_installer.py --log --dry-run

# Check system compatibility
python torch_installer.py --gpu-info --show-versions
```
## Update & Maintenance

### Updating PyTorch

```bash
# Check for updates and reinstall
python torch_installer.py --force-reinstall

# Upgrade to a specific build
python torch_installer.py --force-cuda cu121 --force-reinstall
```

### Updating CUDA (Windows)

```bash
# Auto-install the latest compatible version
python torch_installer.py --auto-install-cuda

# Install a specific version
python torch_installer.py --auto-install-cuda --cuda-version 12.1
```
## Contributing

### Reporting Issues

When reporting issues, please include the system information and attach the generated log file:

```bash
python torch_installer.py --gpu-info --show-versions --log
```

### Feature Requests

- GPU support for additional vendors
- Package manager support for other platforms
- Integration with conda/mamba environments
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- **NVIDIA** for the CUDA toolkit and GPU drivers
- **PyTorch Team** for the excellent deep learning framework
- **Microsoft** for the winget package manager
- **Chocolatey community** for package management on Windows

## Support

For support and questions:

- Create an issue on GitHub
- Join the discussion in GitHub Discussions
- Check the troubleshooting section above

Happy Deep Learning!
## File details

Details for the file `torch_installer_coff33ninja-1.0.3.tar.gz`.

### File metadata

- Download URL: torch_installer_coff33ninja-1.0.3.tar.gz
- Upload date:
- Size: 36.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6dc40947e9a307519efabca336143cece61793a670154ce940a57db5ef0f4cc6` |
| MD5 | `4a17f09788eeffc467cbabb003b15157` |
| BLAKE2b-256 | `723feca3ba4be262a341344bb0907a42a7fad1de4d139b7fdbf2cf10acb6b2ef` |
## File details

Details for the file `torch_installer_coff33ninja-1.0.3-py3-none-any.whl`.

### File metadata

- Download URL: torch_installer_coff33ninja-1.0.3-py3-none-any.whl
- Upload date:
- Size: 17.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `444846664544e7867c97d7dbcdfae2739dffdf34f2924a39ae539676420aed29` |
| MD5 | `2d0ae2344c64ab90b1c5d66254558ac2` |
| BLAKE2b-256 | `13111f8519bb49651cee4d7ef69cdfb382d9bc616e18c763b9506d4785b1e352` |