
MCP server for executing Python code on Google Colab GPU runtimes (T4/L4)

Project description

mcp-server-colab-exec

MCP server that allocates Google Colab GPU runtimes (T4/L4) and executes Python code on them. Lets any MCP-compatible AI assistant — Claude Code, Claude Desktop, Gemini CLI, Cline, and others — run GPU-accelerated code (CUDA, PyTorch, TensorFlow) without local GPU hardware.

Prerequisites

  • Python 3.10+
  • A Google account with access to Google Colab
  • On first run, a browser window opens for OAuth2 consent. The token is cached at ~/.config/colab-exec/token.json for subsequent runs.

Installation

pip install mcp-server-colab-exec

Or run directly with uvx:

uvx mcp-server-colab-exec

Configuration

Claude Code

Add to your project's .mcp.json or ~/.claude/.mcp.json:

{
  "mcpServers": {
    "colab-exec": {
      "command": "mcp-server-colab-exec"
    }
  }
}
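If you installed via uvx instead of pip, a configuration along these lines should also work (an untested sketch; the `command`/`args` split follows the usual pattern for uvx-launched MCP servers):

```json
{
  "mcpServers": {
    "colab-exec": {
      "command": "uvx",
      "args": ["mcp-server-colab-exec"]
    }
  }
}
```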

Or via the CLI:

claude mcp add colab-exec mcp-server-colab-exec

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "colab-exec": {
      "command": "mcp-server-colab-exec"
    }
  }
}

Gemini CLI

gemini mcp add colab-exec -- mcp-server-colab-exec

Tools

colab_execute

Execute inline Python code on a Colab GPU runtime.

Parameter    Type    Default  Description
code         string  -        Python code to execute (required)
accelerator  string  "T4"     GPU type: "T4" (free) or "L4" (premium)
timeout      int     300      Max execution time in seconds

Returns JSON with per-cell output, errors, and stderr.
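The exact response schema isn't documented here; as a sketch, a client might consume a payload shaped like the following (the field names `cells`, `output`, `stderr`, and `error` are assumptions, not confirmed by the source):

```python
import json

# Hypothetical result payload; inspect a real response to confirm field names.
raw = json.dumps({
    "cells": [
        {"output": "True\nTesla T4\n", "stderr": "", "error": None},
    ]
})

result = json.loads(raw)
# Collect per-cell stdout and any reported errors.
outputs = [cell["output"] for cell in result["cells"]]
errors = [cell["error"] for cell in result["cells"] if cell["error"]]
```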

colab_execute_file

Execute a local .py file on a Colab GPU runtime.

Parameter    Type    Default  Description
file_path    string  -        Path to a local .py file (required)
accelerator  string  "T4"     GPU type: "T4" (free) or "L4" (premium)
timeout      int     300      Max execution time in seconds

colab_execute_notebook

Execute code and collect all generated artifacts (images, CSVs, models, etc.).

Parameter    Type    Default  Description
code         string  -        Python code to execute (required)
output_dir   string  -        Local directory for downloaded artifacts (required)
accelerator  string  "T4"     GPU type: "T4" (free) or "L4" (premium)
timeout      int     300      Max execution time in seconds

Artifacts are downloaded as a zip and extracted into output_dir.
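The zip-then-extract step can be reproduced locally with the standard library; this sketch mirrors the described behavior (the function name and return value are illustrative, not the server's actual code):

```python
import zipfile
from pathlib import Path

def extract_artifacts(zip_path: str, output_dir: str) -> list[str]:
    """Extract a downloaded artifact archive into output_dir,
    creating the directory if needed, and return the member names."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
        return zf.namelist()
```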

Examples

Check GPU availability:

colab_execute(code="import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))")

Run nvidia-smi:

colab_execute(code="import subprocess; print(subprocess.run(['nvidia-smi'], capture_output=True, text=True).stdout)")

Train a model and download weights:

colab_execute_notebook(
    code="import torch; model = torch.nn.Linear(10, 1); torch.save(model.state_dict(), '/tmp/model.pt')",
    output_dir="./outputs"
)

Authentication

On first use, the server opens a browser window for Google OAuth2 consent. The access token and refresh token are cached at ~/.config/colab-exec/token.json. Subsequent runs use the cached token and refresh it automatically.

The OAuth2 client credentials are the same ones used by the official Google Colab VS Code extension (google.colab@0.3.0). They are intentionally public.

Troubleshooting

"GPU quota exceeded" — Colab has usage limits. Wait and retry, or use a different Google account.

"Timed out creating kernel session" — The runtime took too long to start. Retry — Colab sometimes has delays during peak usage.

"Authentication failed" — Delete ~/.config/colab-exec/token.json and re-authenticate.

OAuth browser window doesn't open — Ensure you're running in an environment with a browser. For headless servers, authenticate on a machine with a browser first and copy the token file.
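For the headless case, the copy step can look like this (`workstation` is a placeholder hostname for the machine where the browser flow succeeded):

```shell
# On the headless server: make sure the config directory exists.
mkdir -p ~/.config/colab-exec

# Then pull the cached token from the machine with a browser:
# scp workstation:~/.config/colab-exec/token.json ~/.config/colab-exec/
```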

License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

mcp_server_colab_exec-0.1.0.tar.gz (10.8 kB)

Uploaded Source

Built Distribution


mcp_server_colab_exec-0.1.0-py3-none-any.whl (12.2 kB)

Uploaded Python 3

File details

Details for the file mcp_server_colab_exec-0.1.0.tar.gz.

File metadata

  • Download URL: mcp_server_colab_exec-0.1.0.tar.gz
  • Upload date:
  • Size: 10.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for mcp_server_colab_exec-0.1.0.tar.gz

Algorithm    Hash digest
SHA256       91fc8b31bcf7f97bf4da2d9985d637493e86a7485d62eb4b122e1a4bfff0f8af
MD5          a3285977023b20995a695a8d9c62dd91
BLAKE2b-256  0307603970c0fea80f3a77e6185b13a387d27a198c7defcec9380fdd60b83faf


Provenance

The following attestation bundles were made for mcp_server_colab_exec-0.1.0.tar.gz:

Publisher: publish.yml on pdwi2020/mcp-server-colab-exec

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mcp_server_colab_exec-0.1.0-py3-none-any.whl.


File hashes

Hashes for mcp_server_colab_exec-0.1.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       89db8624acdf0464940afdc1553cd8564908de9c939ba6e217be10ef2e9b1696
MD5          49344942daa4b83ae0c8bee2e845a2df
BLAKE2b-256  ac26ac463a65faf3f07618e1aed8eec69435cc01191c5630e26ef2dc1d182f1e


Provenance

The following attestation bundles were made for mcp_server_colab_exec-0.1.0-py3-none-any.whl:

Publisher: publish.yml on pdwi2020/mcp-server-colab-exec

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
