
mcp-server-kaggle-exec

MCP server that executes Python code on Kaggle GPU runtimes (T4 x2, P100, TPU) from any MCP-compatible AI assistant — Claude Code, Claude Desktop, Gemini CLI, Cline, and others. Run GPU-accelerated code (CUDA, PyTorch, TensorFlow) without local GPU hardware using Kaggle's free 30hr/week GPU quota.

Prerequisites

  • Python 3.10+
  • A Kaggle account
  • Kaggle API credentials: either KAGGLE_API_TOKEN env var (KGAT_* token) or ~/.kaggle/kaggle.json (see Authentication)

Installation

pip install mcp-server-kaggle-exec

Or run directly with uvx:

uvx mcp-server-kaggle-exec

Configuration

Claude Code

Add to your project's .mcp.json or ~/.claude/.mcp.json:

{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec"
    }
  }
}

Or via the CLI:

claude mcp add kaggle-exec mcp-server-kaggle-exec

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec"
    }
  }
}

Gemini CLI

gemini mcp add kaggle-exec -- mcp-server-kaggle-exec

Tools

kaggle_execute

Execute inline Python code on a Kaggle GPU kernel.

| Parameter  | Type   | Default | Description                         |
|------------|--------|---------|-------------------------------------|
| code       | string |         | Python code to execute (required)   |
| enable_gpu | bool   | true    | Whether to request GPU acceleration |
| timeout    | int    | 600     | Max wait time in seconds            |

Returns JSON with stdout, stderr, status, output_files, and execution_time.
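
An illustrative result payload might look like the following. Only the field names come from the description above; the values are invented for the example:

```python
import json

# Illustrative kaggle_execute result. Field names match the docs above;
# the values here are made up for demonstration.
raw = """
{
  "stdout": "True\\nTesla T4\\n",
  "stderr": "",
  "status": "complete",
  "output_files": [],
  "execution_time": 94.2
}
"""

result = json.loads(raw)
if result["status"] == "complete":
    print(result["stdout"], end="")
```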

kaggle_execute_file

Execute a local .py file on a Kaggle GPU kernel.

| Parameter  | Type   | Default | Description                         |
|------------|--------|---------|-------------------------------------|
| file_path  | string |         | Path to a local .py file (required) |
| enable_gpu | bool   | true    | Whether to request GPU acceleration |
| timeout    | int    | 600     | Max wait time in seconds            |

kaggle_execute_notebook

Execute code and download all generated output files (images, models, CSVs, etc.).

| Parameter  | Type   | Default | Description                                          |
|------------|--------|---------|------------------------------------------------------|
| code       | string |         | Python code to execute (required)                    |
| output_dir | string |         | Local directory for downloaded artifacts (required)  |
| enable_gpu | bool   | true    | Whether to request GPU acceleration                  |
| timeout    | int    | 600     | Max wait time in seconds                             |

Output files are downloaded to output_dir. To save files for download, write them to the current directory in your Kaggle code.

Examples

Check GPU availability:

kaggle_execute(code="import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))")

Run nvidia-smi:

kaggle_execute(code="import subprocess; print(subprocess.run(['nvidia-smi'], capture_output=True, text=True).stdout)")

Train a model and download weights:

kaggle_execute_notebook(
    code="import torch; model = torch.nn.Linear(10, 1); torch.save(model.state_dict(), 'model.pt')",
    output_dir="./outputs"
)

CPU-only execution (faster startup):

kaggle_execute(code="print('Hello from Kaggle!')", enable_gpu=False)

How It Works

Unlike Google Colab (which uses real-time WebSocket execution), Kaggle uses batch execution:

  1. Your code is pushed as a private Kaggle kernel
  2. Kaggle queues and runs the kernel (30-120s startup + execution time)
  3. Once complete, the output log and files are downloaded
  4. The kernel is cleaned up (left as private)

This means there's no streaming output — you get results only after execution completes.
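
The four steps above can be sketched with the official Kaggle CLI (`kaggle kernels push` / `status` / `output`). This is a simplified illustration, not this server's actual implementation: the kernel slug, directory paths, and status-string matching are assumptions, and the CLI runner is injectable so the loop can be exercised without real credentials:

```python
import subprocess
import time

def run_cli(args):
    """Invoke the Kaggle CLI and return its stdout."""
    return subprocess.run(["kaggle"] + args, capture_output=True,
                          text=True, check=True).stdout

def execute_kernel(kernel_dir, slug, out_dir, timeout=600,
                   poll_every=15, cli=run_cli):
    """Push a kernel, poll until it finishes, then fetch its outputs."""
    cli(["kernels", "push", "-p", kernel_dir])       # step 1: push as a kernel
    deadline = time.time() + timeout
    while time.time() < deadline:                    # step 2: queued batch run
        status = cli(["kernels", "status", slug])    # exact wording may vary
        if "complete" in status:
            # step 3: download the output log and files
            cli(["kernels", "output", slug, "-p", out_dir])
            return "complete"
        if "error" in status:
            return "error"
        time.sleep(poll_every)
    raise TimeoutError(f"kernel {slug} did not finish in {timeout}s")
```

The injectable `cli` parameter is just a testing convenience; in real use the defaults shell out to the `kaggle` CLI, which requires the credentials described under Authentication.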

Authentication

Two authentication methods are supported:

Option 1: KGAT_* Access Token (recommended)

Kaggle API v2 access tokens (KGAT_* format) work via environment variable:

export KAGGLE_API_TOKEN=KGAT_your_token_here

To get a token: go to kaggle.com/settings, then under the API section click Create New Access Token.

When using with MCP, pass the env var in your server config:

{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec",
      "env": {
        "KAGGLE_API_TOKEN": "KGAT_your_token_here"
      }
    }
  }
}

Option 2: Legacy kaggle.json

Place your Kaggle API key at ~/.kaggle/kaggle.json:

  1. Go to kaggle.com/settings
  2. Scroll to API section
  3. Click Create New Token
  4. Move the downloaded kaggle.json to ~/.kaggle/kaggle.json
  5. Set permissions: chmod 600 ~/.kaggle/kaggle.json
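
Steps 4-5 can be sanity-checked with a short script. This is a standalone sketch (the helper name is ours, not part of this package) that verifies the file exists, parses, and is not readable by other users:

```python
import json
import stat
from pathlib import Path

def check_kaggle_json(path=Path.home() / ".kaggle" / "kaggle.json"):
    """Return a short diagnosis of the legacy credentials file."""
    path = Path(path)
    if not path.exists():
        return "missing"
    creds = json.loads(path.read_text())
    if not {"username", "key"} <= creds.keys():
        return "malformed"
    mode = stat.S_IMODE(path.stat().st_mode)
    if mode & 0o077:                      # group/other bits set
        return "permissions too open"     # fix: chmod 600 ~/.kaggle/kaggle.json
    return "ok"
```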

GPU Quota

Kaggle provides ~30 hours of free GPU per week. The API supports enable_gpu: true/false but does not allow selecting specific GPU types (T4 vs P100) — Kaggle assigns the GPU automatically.

For CPU-only tasks, set enable_gpu=False to avoid consuming GPU quota.
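
One way to automate that choice is a keyword heuristic over the code you are about to submit. This is purely our own assumption for illustration, not a feature of this server:

```python
# Rough heuristic: request GPU only if the snippet mentions GPU-related
# libraries or tools. Token list is a guess, not exhaustive.
GPU_HINTS = ("cuda", "torch", "tensorflow", "nvidia-smi", "jax", "gpu")

def suggest_enable_gpu(code: str) -> bool:
    """Guess whether a code snippet is worth spending GPU quota on."""
    lowered = code.lower()
    return any(hint in lowered for hint in GPU_HINTS)
```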

Troubleshooting

"Kaggle authentication failed" — Ensure either KAGGLE_API_TOKEN env var is set (KGAT_* token) or ~/.kaggle/kaggle.json exists. See Authentication above.

"Kernel timed out" — Increase the timeout parameter. Kaggle kernel startup can take 30-120 seconds, plus execution time.
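
A simple rule of thumb when picking `timeout`: budget for worst-case startup plus a padded runtime. The numbers here are our own guess based on the startup range above:

```python
def suggested_timeout(expected_runtime_s: float,
                      startup_s: int = 120,
                      margin: float = 1.5) -> int:
    """Worst-case startup plus padded runtime, in whole seconds."""
    return int(startup_s + expected_runtime_s * margin)

# e.g. a job expected to run about 10 minutes
print(suggested_timeout(600))
```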

"GPU quota exceeded" — You've used your ~30hr weekly GPU quota. Wait for the weekly reset or use enable_gpu=False for CPU-only execution.

"Kernel status: error" — Check the stderr in the response for Python errors in your code.

Comparison with mcp-server-colab-exec

| Aspect          | colab-exec              | kaggle-exec               |
|-----------------|-------------------------|---------------------------|
| Execution model | Real-time (WebSocket)   | Batch (push + poll)       |
| Startup time    | ~10-30s                 | ~30-120s                  |
| Auth            | Google OAuth2 (browser) | API token file            |
| GPU types       | T4, L4                  | T4 x2, P100, TPU          |
| GPU selection   | Can pick T4/L4          | API only supports on/off  |
| Free quota      | Usage-based             | ~30 hr/week               |
| Output          | Streaming               | After completion          |

License

MIT
