mcp-server-kaggle-exec
MCP server that executes Python code on Kaggle GPU runtimes (T4 x2, P100, TPU) from any MCP-compatible AI assistant — Claude Code, Claude Desktop, Gemini CLI, Cline, and others. Run GPU-accelerated code (CUDA, PyTorch, TensorFlow) without local GPU hardware using Kaggle's free 30hr/week GPU quota.
Prerequisites
- Python 3.10+
- A Kaggle account
- Kaggle API credentials: either the KAGGLE_API_TOKEN env var (a KGAT_* token) or ~/.kaggle/kaggle.json (see Authentication)
Installation
pip install mcp-server-kaggle-exec
Or run directly with uvx:
uvx mcp-server-kaggle-exec
Configuration
Claude Code
Add to your project's .mcp.json or ~/.claude/.mcp.json:
{
"mcpServers": {
"kaggle-exec": {
"command": "mcp-server-kaggle-exec"
}
}
}
Or via the CLI:
claude mcp add kaggle-exec mcp-server-kaggle-exec
Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"kaggle-exec": {
"command": "mcp-server-kaggle-exec"
}
}
}
Gemini CLI
gemini mcp add kaggle-exec -- mcp-server-kaggle-exec
Tools
kaggle_execute
Execute inline Python code on a Kaggle GPU kernel.
| Parameter | Type | Default | Description |
|---|---|---|---|
| code | string | — | Python code to execute (required) |
| enable_gpu | bool | true | Whether to request GPU acceleration |
| timeout | int | 600 | Max wait time in seconds |
Returns JSON with stdout, stderr, status, output_files, and execution_time.
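The response can be handled as ordinary JSON. A minimal sketch, using the field names listed above; the sample payload is illustrative, not a real tool response:

```python
import json

# Illustrative payload with the documented fields; real values come
# from the kaggle_execute tool response.
raw = '''{
  "stdout": "True\\nTesla P100-PCIE-16GB\\n",
  "stderr": "",
  "status": "complete",
  "output_files": [],
  "execution_time": 94.2
}'''

result = json.loads(raw)
if result["status"] == "complete":
    print(result["stdout"], end="")
else:
    print("kernel failed:", result["stderr"])
```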
kaggle_execute_file
Execute a local .py file on a Kaggle GPU kernel.
| Parameter | Type | Default | Description |
|---|---|---|---|
| file_path | string | — | Path to a local .py file (required) |
| enable_gpu | bool | true | Whether to request GPU acceleration |
| timeout | int | 600 | Max wait time in seconds |
kaggle_execute_notebook
Execute code and download all generated output files (images, models, CSVs, etc.).
| Parameter | Type | Default | Description |
|---|---|---|---|
| code | string | — | Python code to execute (required) |
| output_dir | string | — | Local directory for downloaded artifacts (required) |
| enable_gpu | bool | true | Whether to request GPU acceleration |
| timeout | int | 600 | Max wait time in seconds |
Output files are downloaded to output_dir. To save files for download, write them to the current directory in your Kaggle code.
Examples
Check GPU availability:
kaggle_execute(code="import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))")
Run nvidia-smi:
kaggle_execute(code="import subprocess; print(subprocess.run(['nvidia-smi'], capture_output=True, text=True).stdout)")
Train a model and download weights:
kaggle_execute_notebook(
code="import torch; model = torch.nn.Linear(10, 1); torch.save(model.state_dict(), 'model.pt')",
output_dir="./outputs"
)
CPU-only execution (faster startup):
kaggle_execute(code="print('Hello from Kaggle!')", enable_gpu=False)
How It Works
Unlike Google Colab (which uses real-time WebSocket execution), Kaggle uses batch execution:
- Your code is pushed as a private Kaggle kernel
- Kaggle queues and runs the kernel (30-120s startup + execution time)
- Once complete, the output log and files are downloaded
- The kernel is cleaned up (left as private)
This means there's no streaming output — you get results only after execution completes.
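The push-and-poll pattern behind those steps can be sketched as follows. This is a simplified illustration, not the server's actual implementation; a stubbed status callback stands in for the Kaggle API:

```python
import time

def poll_until_complete(get_status, timeout=600, interval=5.0):
    """Poll a batch kernel's status until it finishes or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("complete", "error"):
            return status
        time.sleep(interval)
    raise TimeoutError("kernel did not finish within the timeout")

# Stub: pretends the kernel completes on the third status check.
_states = iter(["queued", "running", "complete"])
print(poll_until_complete(lambda: next(_states), interval=0.01))  # → complete
```

The `timeout` parameter on each tool bounds exactly this kind of polling loop, which is why it must cover both kernel startup and execution time.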
Authentication
Two authentication methods are supported:
Option 1: KGAT_* Access Token (recommended)
Kaggle API v2 access tokens (KGAT_* format) work via environment variable:
export KAGGLE_API_TOKEN=KGAT_your_token_here
To get a token: go to kaggle.com/settings → API → Create New Access Token.
When using with MCP, pass the env var in your server config:
{
"mcpServers": {
"kaggle-exec": {
"command": "mcp-server-kaggle-exec",
"env": {
"KAGGLE_API_TOKEN": "KGAT_your_token_here"
}
}
}
}
Option 2: Legacy kaggle.json
Place your Kaggle API key at ~/.kaggle/kaggle.json:
- Go to kaggle.com/settings
- Scroll to API section
- Click Create New Token
- Move the downloaded kaggle.json to ~/.kaggle/kaggle.json
- Set permissions: chmod 600 ~/.kaggle/kaggle.json
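For reference, the downloaded kaggle.json has this shape (the values shown are placeholders):

```json
{
  "username": "your-kaggle-username",
  "key": "your-api-key"
}
```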
GPU Quota
Kaggle provides ~30 hours of free GPU per week. The API supports enable_gpu: true/false but does not allow selecting specific GPU types (T4 vs P100) — Kaggle assigns the GPU automatically.
For CPU-only tasks, set enable_gpu=False to avoid consuming GPU quota.
Troubleshooting
"Kaggle authentication failed" — Ensure either KAGGLE_API_TOKEN env var is set (KGAT_* token) or ~/.kaggle/kaggle.json exists. See Authentication above.
"Kernel timed out" — Increase the timeout parameter. Kaggle kernel startup can take 30-120 seconds, plus execution time.
"GPU quota exceeded" — You've used your ~30hr weekly GPU quota. Wait for the weekly reset or use enable_gpu=False for CPU-only execution.
"Kernel status: error" — Check the stderr in the response for Python errors in your code.
Comparison with mcp-server-colab-exec
| Aspect | colab-exec | kaggle-exec |
|---|---|---|
| Execution model | Real-time (WebSocket) | Batch (push + poll) |
| Startup time | ~10-30s | ~30-120s |
| Auth | Google OAuth2 (browser) | API token file |
| GPU types | T4, L4 | T4 x2, P100, TPU |
| GPU selection | Can pick T4/L4 | API only supports on/off |
| Free quota | Usage-based | ~30 hr/week |
| Output | Streaming | After completion |
License
MIT