
English | 简体中文 | 繁體中文 | 日本語 | Français

TileGym

TileGym is a CUDA Tile kernel library that provides a rich collection of kernel tutorials and examples for tile-based GPU programming.

Overview | Features | Installation | Quick Start | Contributing | License

Overview

This repository aims to provide helpful kernel tutorials and examples for tile-based GPU programming. TileGym is a playground for experimenting with CUDA Tile, where you can learn how to build efficient GPU kernels and explore their integration into real-world large language models such as Llama 3.1 and DeepSeek V2. Whether you're learning tile-based GPU programming or looking to optimize your LLM implementations, TileGym offers practical examples and comprehensive guidance.

Features

  • Rich collection of CUDA Tile kernel examples
  • Practical kernel implementations for common deep learning operators
  • Performance benchmarking to evaluate kernel efficiency
  • End-to-end integration examples with popular LLMs (Llama 3.1, DeepSeek V2)

Installation

Prerequisites

⚠️ Important: TileGym requires CUDA 13.1+ and NVIDIA Blackwell architecture GPUs (e.g., B200, RTX 5080, RTX 5090). We will support other GPU architectures in the future. Download CUDA from NVIDIA CUDA Downloads.

  • PyTorch (version 2.9.1 or compatible)
  • CUDA 13.1+ (Required - TileGym is built and tested exclusively on CUDA 13.1+)
  • Triton (included with PyTorch installation)

Setup Steps

1. Prepare torch and triton environment

If you already have torch and triton, skip this step.

pip install --pre torch --index-url https://download.pytorch.org/whl/cu130

We have verified that torch==2.9.1 works. Triton is installed automatically alongside torch.
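
You can sanity-check the environment from Python using standard torch and triton APIs (a minimal check; the compute capability values in the comment are NVIDIA's published values for Blackwell parts):

import torch
import triton

print("torch:", torch.__version__)          # 2.9.1 is the verified version
print("triton:", triton.__version__)
print("CUDA runtime:", torch.version.cuda)  # expect 13.x for the cu130 wheels
assert torch.cuda.is_available(), "no CUDA device visible"
major, minor = torch.cuda.get_device_capability()
# Blackwell GPUs report compute capability 10.x (B200) or 12.x (RTX 5080/5090).
print(f"compute capability: {major}.{minor}")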

2. Install TileGym

git clone https://github.com/NVIDIA/TileGym.git
cd TileGym

TileGym uses cuda-tile for GPU kernel programming, which depends on the tileiras compiler at runtime. Choose one of the following options depending on your environment:

  • Option A — Bundled tileiras via pip (recommended for most users):

    pip install .[tileiras]
    

    This installs TileGym along with cuda-tile[tileiras], which bundles the tileiras compiler directly into your Python environment. No separate compiler installation is needed.

  • Option B — System tileiras:

    pip install .
    

    Use this if you already have tileiras available on your system (e.g., from CUDA Toolkit 13.1+).

Then, install cuda-tile-experimental:

⚠️ Required: TileGym kernels use features from cuda-tile-experimental (e.g., the autotuner). This package is not available on PyPI and must be installed separately from source:

pip install "cuda-tile-experimental @ git+https://github.com/NVIDIA/cutile-python.git#subdirectory=experimental"

cuda-tile-experimental is maintained by the CUDA Tile team as a source-only experimental package. See the "Experimental Features (Optional)" section of the cutile-python repository for more details.

All runtime dependencies (except cuda-tile-experimental) are declared in requirements.txt and are installed automatically when you run pip install . above. You can also pre-install them with pip install -r requirements.txt if you prefer an explicit step (note that this installs cuda-tile without the bundled tileiras compiler).

For editable (development) mode, use pip install -e . or pip install -e .[tileiras].
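
As a quick post-install check, you can confirm the packages are visible to Python. This sketch uses only the standard library; the distribution names are taken from the install commands above and may differ in your environment:

from importlib.metadata import version, PackageNotFoundError

for dist in ("tilegym", "cuda-tile", "cuda-tile-experimental"):
    try:
        print(dist, version(dist))
    except PackageNotFoundError:
        print(dist, "NOT INSTALLED")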

We also provide a Dockerfile; see modeling/transformers/README.md for details.

Quick Start

There are three main ways to use TileGym, plus an optional Julia track:

1. Explore Kernel Examples

All kernel implementations are located in the src/tilegym/ops/ directory. Function-level usage and minimal test scripts for individual ops are documented in tests/ops/README.md.
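
As a sketch of what such a test looks like (the import path and op name below are hypothetical placeholders; the real entry points are listed in tests/ops/README.md), an op test typically compares a TileGym kernel against a PyTorch reference:

import torch
from tilegym.ops import softmax  # hypothetical op name; see tests/ops/README.md

x = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
ref = torch.softmax(x, dim=-1)   # PyTorch reference
out = softmax(x)                 # TileGym kernel (hypothetical signature)
torch.testing.assert_close(out, ref, rtol=1e-3, atol=1e-3)
print("max abs error:", (out - ref).abs().max().item())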

2. Run Benchmarks

Evaluate kernel performance with micro-benchmarks:

cd tests/benchmark
bash run_all.sh

A complete benchmark guide is available in tests/benchmark/README.md.
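
The scripts above are the authoritative harness; for ad-hoc measurements of a single kernel, a generic CUDA-event timing loop (plain torch, nothing TileGym-specific) looks like this:

import torch

def bench_ms(fn, *args, warmup=10, iters=100):
    # Average milliseconds per call, measured with CUDA events.
    for _ in range(warmup):
        fn(*args)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

x = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
print(f"torch.softmax: {bench_ms(torch.softmax, x, -1):.3f} ms")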

3. Run LLM Transformer Examples

Use TileGym kernels in end-to-end inference scenarios. We provide runnable scripts and instructions for transformer language models (e.g., Llama 3.1-8B) accelerated using TileGym kernels.

First, install the additional dependency:

pip install accelerate==1.13.0 --no-deps

Containerized Setup (Docker):

docker build -t tilegym-transformers -f modeling/transformers/Dockerfile .
docker run --gpus all -it tilegym-transformers bash

More details can be found in modeling/transformers/README.md.
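
For orientation, a baseline generation harness with transformers + accelerate looks roughly like the sketch below. How TileGym kernels are swapped into the model is covered in modeling/transformers/README.md and is not shown here; the model ID is illustrative and gated on Hugging Face:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative; requires HF access
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # device placement handled by accelerate
)
inputs = tok("Tile-based GPU programming is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))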

4. Julia (cuTile.jl) Kernels (Optional)

TileGym also includes experimental cuTile.jl kernel implementations in Julia. These are self-contained in the julia/ directory and do not require the Python TileGym package.

Prerequisites: Julia 1.12+, CUDA 13.1+, and a Blackwell GPU

# Install Julia (if not already installed)
curl -fsSL https://install.julialang.org | sh

# Install dependencies
julia --project=julia/ -e 'using Pkg; Pkg.instantiate()'

# Run tests
julia --project=julia/ julia/test/runtests.jl

See julia/Project.toml for the full dependency list.

Contributing

We welcome contributions of all kinds. Please read our CONTRIBUTING.md for guidelines, including the Contributor License Agreement (CLA) process.

License and third-party notices
