Develop C++/CUDA extensions with PyTorch like Python scripts
CharonLoad
CharonLoad is a bridge between Python code and rapidly developed custom C++/CUDA extensions to make writing high-performance research code with PyTorch easy:
- 🔥 PyTorch C++ API detection and linking
- 🔨 Automatic just-in-time (JIT) compilation of the C++/CUDA part
- 📦 Cached incremental builds and automatic clean builds
- 🔗 Full power of CMake for handling C++ dependencies
- ⌨️ Python stub file generation for syntax highlighting and auto-completion in VS Code
- 🐛 Interactive mixed Python/C++ debugging support in VS Code via the Python C++ Debugger extension
CharonLoad lowers the barrier to writing and experimenting with custom GPU kernels in PyTorch by getting complex boilerplate code and common pitfalls out of your way. Developing C++/CUDA code with CharonLoad feels similar to writing Python scripts and lets you follow the same familiar workflow.
Installation
CharonLoad requires Python >=3.8 and can be installed from PyPI:

```shell
pip install charonload
```
Quick Start
CharonLoad only requires minimal changes to existing projects. In particular, a small configuration of the C++/CUDA project is added on the Python side, while the CMake and C++ parts use a few predefined functions provided by CharonLoad:
- `<your_project>/main.py`

```python
import pathlib

import charonload

VSCODE_STUBS_DIRECTORY = pathlib.Path(__file__).parent / "typings"

charonload.module_config["my_cpp_cuda_ext"] = charonload.Config(
    project_directory=pathlib.Path(__file__).parent / "<my_cpp_cuda_ext>",
    build_directory="custom/build/directory",  # optional
    stubs_directory=VSCODE_STUBS_DIRECTORY,  # optional
)

import other_module
```
- `<your_project>/other_module.py`

```python
import my_cpp_cuda_ext  # JIT compiles and loads the extension

tensor_from_ext = my_cpp_cuda_ext.generate_tensor()
```
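Conceptually, registering the extension name makes a later plain `import` of that name trigger the JIT build and load. The general mechanism behind this pattern is a Python meta path import hook. The following is a toy, standard-library-only sketch of such a hook, not CharonLoad's actual implementation; the fake "build" step and the stub `generate_tensor` are placeholders:

```python
import importlib.abc
import importlib.util
import sys
import types


class JITExtensionFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Toy stand-in: intercepts the import of a registered module name,
    runs a (fake) build step, then provides the module object."""

    def __init__(self, name):
        self.name = name
        self.built = False

    def find_spec(self, fullname, path, target=None):
        # Only claim the registered extension name, defer everything else.
        if fullname == self.name:
            return importlib.util.spec_from_loader(fullname, self)
        return None

    def create_module(self, spec):
        return types.ModuleType(spec.name)

    def exec_module(self, module):
        self.built = True  # a real hook would invoke CMake/ninja here
        module.generate_tensor = lambda: [[0.0] * 3 for _ in range(3)]


finder = JITExtensionFinder("my_cpp_cuda_ext")
sys.meta_path.insert(0, finder)

import my_cpp_cuda_ext  # triggers the "build" on first import

print(finder.built)
print(my_cpp_cuda_ext.generate_tensor())
```

Because the hook sits on `sys.meta_path`, the first `import my_cpp_cuda_ext` runs the build step and every later import reuses the cached module from `sys.modules`.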
- `<your_project>/<my_cpp_cuda_ext>/CMakeLists.txt`

```cmake
find_package(charonload)

if(charonload_FOUND)
    charonload_add_torch_library(${TORCH_EXTENSION_NAME} MODULE)

    target_sources(${TORCH_EXTENSION_NAME} PRIVATE src/<my_bindings>.cpp)

    # Further properties, e.g. linking against other 3rd-party libraries
    # ...
endif()
```
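The comment about third-party libraries can be filled in with standard CMake commands. As an illustrative sketch (OpenMP is chosen here only as a commonly available example dependency, not something CharonLoad requires), linking an extra library could look like:

```cmake
# Hypothetical addition inside the if(charonload_FOUND) block:
# resolve and link an extra third-party dependency.
find_package(OpenMP)

if(OpenMP_CXX_FOUND)
    target_link_libraries(${TORCH_EXTENSION_NAME} PRIVATE OpenMP::OpenMP_CXX)
endif()
```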
- `<your_project>/<my_cpp_cuda_ext>/src/<my_bindings>.cpp`

```cpp
#include <torch/python.h>

torch::Tensor generate_tensor();  // Implemented somewhere in <my_cpp_cuda_ext>

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
{
    m.def("generate_tensor", &generate_tensor, "Optional Python docstring");
}
```
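The binding file only declares `generate_tensor`. One possible definition, placed in a hypothetical additional source file such as `src/generate_tensor.cpp` and listed via `target_sources`, could use the libtorch C++ API like this (a sketch, not code shipped with CharonLoad; it needs libtorch to compile):

```cpp
#include <torch/torch.h>

// Hypothetical implementation of the function declared in <my_bindings>.cpp.
// Creates a small 3x3 tensor on the GPU when CUDA is available, else on the CPU.
torch::Tensor generate_tensor()
{
    const torch::Device device = torch::cuda::is_available() ? torch::Device(torch::kCUDA)
                                                             : torch::Device(torch::kCPU);

    return torch::arange(9, torch::TensorOptions().dtype(torch::kFloat32).device(device))
        .reshape({3, 3});
}
```

On the Python side, the tensor returned by `my_cpp_cuda_ext.generate_tensor()` behaves like any other `torch.Tensor`.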
Contributing
If you would like to contribute to CharonLoad, you can find more information in the Contributing guide.
License
MIT
Contact
Patrick Stotko - stotko@cs.uni-bonn.de