Execute Python on Kaggle's free GPUs (T4/P100) from your terminal. Perfect for AI agents.
KaggleRun
Execute Python on Kaggle's FREE GPUs from your terminal. No browser needed.
Perfect for AI coding agents like Claude Code, Cursor, and Cline.
pip install kagglerun
kagglerun --url <your-kaggle-url> "import torch; print(torch.cuda.get_device_name(0))"
# Output: Tesla T4 (or Tesla P100, depending on the accelerator selected)
Why KaggleRun?
| Feature | KaggleRun | Kaggle Web | Colab | RunPod |
|---|---|---|---|---|
| No browser needed | Yes | No | No | No |
| CLI-native | Yes | No | No | Partial |
| AI agent ready (MCP) | Yes | No | No | No |
| Free GPU (T4/P100) | Yes | Yes | No | No |
| Real-time output | Yes | Yes | Yes | Yes |
| Cost | $0 | $0 | $0-10/mo | $0.50+/hr |
KaggleRun bridges your local terminal to Kaggle's free GPU infrastructure.
Quick Start
1. Get Your Kaggle Jupyter URL
- Go to kaggle.com/code and create a new notebook
- Enable GPU: Settings → Accelerator → GPU T4 x2 or GPU P100
- Click the "..." menu → "Copy VS Code Server URL"
Your URL looks like one of these formats (both work automatically):
https://kkb-production.jupyter-proxy.kaggle.net?token=eyJ...
https://kkb-production.jupyter-proxy.kaggle.net/k/123456/eyJ.../proxy
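The auto-detection mentioned above can be sketched roughly like this (a hypothetical helper for illustration; KaggleRun's actual parsing logic may differ):

```python
from urllib.parse import parse_qs, urlparse

def detect_kaggle_url_format(url: str) -> str:
    """Classify which of the two supported Kaggle URL shapes was given."""
    parsed = urlparse(url)
    if "token" in parse_qs(parsed.query):
        return "query-token"  # https://...?token=eyJ...
    if parsed.path.startswith("/k/") and parsed.path.endswith("/proxy"):
        return "path-proxy"   # https://.../k/<id>/<jwt>/proxy
    return "unknown"
```

Either shape carries the same JWT credential, so a client only needs to normalize them to one canonical form before talking to the Jupyter server.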
2. Install & Run
# Install
pip install kagglerun
# Set URL (optional - avoids --url each time)
export KAGGLE_JUPYTER_URL="https://your-kaggle-url-here"
# Execute code
kagglerun "print('Hello from Kaggle GPU!')"
# Run a Python file
kagglerun train.py
# Check GPU
kagglerun --gpu-info
Features
CLI Commands
# Execute inline code
kagglerun "import torch; print(torch.cuda.is_available())"
# Execute a local Python file
kagglerun my_training_script.py
# Get GPU information
kagglerun --gpu-info
# Get system information (Python, PyTorch, CUDA)
kagglerun --system-info
# List files in /kaggle/working/
kagglerun --list-files
# Upload a file
kagglerun --upload data.csv
# Download a file
kagglerun --download results.csv --output ./local_results.csv
# Read a remote file
kagglerun --read output.txt
# Test connection
kagglerun --test
Python API
from kagglerun import KaggleExecutor
# Connect (URL format auto-detected)
executor = KaggleExecutor("https://your-kaggle-url")
# Execute code
result = executor.execute("""
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Device: {torch.cuda.get_device_name(0)}")
""")
print(result['output_text'])
# Run ML training
result = executor.execute("""
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=10000, n_features=20)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X, y)
print(f"Score: {clf.score(X, y):.4f}")
""")
# Save file to Kaggle
executor.save_text("config.json", '{"epochs": 10}')
# Download trained model
model_bytes = executor.download_file("model.pkl")
with open("local_model.pkl", "wb") as f:
    f.write(model_bytes)
MCP Server (For AI Agents)
KaggleRun includes a Model Context Protocol (MCP) server, allowing AI assistants like Claude Code to execute code on Kaggle GPUs as a native tool.
Setup in Claude Code:
1. Install with MCP support:

   pip install kagglerun[mcp]

2. Add to your Claude Code MCP settings (~/.claude/mcp_settings.json):

   {
     "mcpServers": {
       "kagglerun": {
         "command": "python",
         "args": ["-m", "kagglerun.mcp_server"],
         "env": {
           "KAGGLE_JUPYTER_URL": "https://your-kaggle-url"
         }
       }
     }
   }

3. Claude can now use these tools:
   - execute_python - Run code on GPU
   - get_gpu_info - Check GPU status
   - list_files - Browse remote files
   - save_file / read_file - File operations
   - download_file - Get files locally
Example conversation with Claude:
You: "Train a ResNet18 on CIFAR-10 using the Kaggle GPU"
Claude: Uses execute_python tool to run training on Kaggle GPU
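Under the hood, MCP clients invoke these tools via JSON-RPC `tools/call` requests. A request for the execute_python tool might look like the following (the envelope follows the MCP specification; the exact argument schema expected by KaggleRun's server is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_python",
    "arguments": {
      "code": "import torch; print(torch.cuda.is_available())"
    }
  }
}
```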
Use Cases
1. ML Model Training
result = executor.execute("""
import torch
import torch.nn as nn
from torchvision import models
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).cuda()
x = torch.randn(32, 3, 224, 224).cuda()
with torch.amp.autocast("cuda"):
    output = model(x)
print(f"Output shape: {output.shape}")
print(f"GPU Memory: {torch.cuda.memory_allocated()/1e9:.2f} GB")
""")
2. Large Language Model Inference
result = executor.execute("""
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()
prompt = "The benefits of open source AI are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
""", timeout=300)
3. Data Processing at Scale
result = executor.execute("""
import pandas as pd
import numpy as np
# Generate large dataset
df = pd.DataFrame(np.random.randn(1_000_000, 100))
df.columns = [f'feature_{i}' for i in range(100)]
# Process
result = df.describe()
print(result)
# Save
df.to_parquet('/kaggle/working/processed.parquet')
print("Saved to /kaggle/working/processed.parquet")
""")
# Download result
data = executor.download_file("processed.parquet")
4. Kaggle Competition Submissions
# Upload your solution
executor.upload_file("submission.csv")
# Or generate on Kaggle
result = executor.execute("""
import pandas as pd
# Your prediction code here
predictions = [0.1, 0.9, 0.3, ...]
submission = pd.DataFrame({
    'id': range(len(predictions)),
    'target': predictions
})
submission.to_csv('/kaggle/working/submission.csv', index=False)
print("Submission ready!")
""")
API Reference
KaggleExecutor
executor = KaggleExecutor(
    base_url: str,               # Kaggle Jupyter URL (auto-detects format)
    verbose: bool = True,        # Print status messages
    timeout: int = 120,          # Default execution timeout (seconds)
    on_output: Callable = None   # Callback for streaming output
)
Methods:
| Method | Description |
|---|---|
| `execute(code, timeout, wait_idle)` | Execute Python code |
| `run_file(filepath, timeout)` | Execute a local .py file |
| `upload_file(local_path, remote_name)` | Upload a file to /kaggle/working/ |
| `save_text(filename, content)` | Save text to a remote file |
| `read_file(remote_path)` | Read a remote text file |
| `download_file(remote_path)` | Download a file as bytes |
| `list_files(path)` | List directory contents |
| `get_gpu_info()` | Get GPU information |
| `get_system_info()` | Get Python/PyTorch info |
| `test_connection()` | Test API connectivity |
| `interrupt_kernel()` | Stop the current execution |
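The on_output parameter enables streaming: a callback is invoked as output arrives rather than waiting for execute() to return. A minimal collector pattern might look like this (the KaggleExecutor wiring is shown commented out since it needs a live URL, and it is an assumption that the callback receives each output chunk as a str):

```python
def make_collector():
    """Collect streamed output chunks while also echoing them live."""
    chunks = []
    def on_output(text):
        chunks.append(text)
        print(text, end="")  # echo each chunk as it arrives
    return chunks, on_output

chunks, callback = make_collector()

# Hypothetical wiring (requires a live Kaggle URL):
# executor = KaggleExecutor("https://your-kaggle-url", on_output=callback)
# executor.execute("for i in range(3): print(i)")

# Simulate the chunks a kernel would stream back:
for piece in ["0\n", "1\n", "2\n"]:
    callback(piece)
assert "".join(chunks) == "0\n1\n2\n"
```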
Convenience Function
from kagglerun import connect
executor = connect("https://your-kaggle-url")  # Raises ConnectionError on failure
How It Works
┌──────────────────┐ REST API ┌───────────────────┐
│ Your Terminal │ ◄──────────────► │ Kaggle Jupyter │
│ or AI Agent │ │ Server │
│ │ WebSocket │ │
│ kagglerun │ ◄══════════════► │ Python Kernel │
│ │ (real-time) │ (GPU: T4/P100) │
└──────────────────┘ └───────────────────┘
- REST API - Kernel management (create, list, interrupt)
- WebSocket - Real-time code execution and output streaming
- Base64 - Binary file transfer (models, datasets)
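The WebSocket step speaks the standard Jupyter kernel protocol. As a sketch, an execute_request envelope sent on the shell channel looks roughly like this (field layout per the Jupyter messaging specification; whether KaggleRun sets exactly these fields is an assumption):

```python
import datetime
import json
import uuid

def build_execute_request(code: str) -> str:
    """Build a Jupyter 'execute_request' message for the shell channel."""
    msg = {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": uuid.uuid4().hex,
            "username": "kagglerun",
            "msg_type": "execute_request",
            "version": "5.3",
            "date": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        },
        "parent_header": {},
        "metadata": {},
        "content": {
            "code": code,
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": False,
        },
        "channel": "shell",
    }
    return json.dumps(msg)
```

The kernel replies on the iopub channel with stream messages (stdout/stderr chunks), which is what makes real-time output streaming possible.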
Limitations
- Session duration: Kaggle notebooks time out after ~12 hours of inactivity
- JWT expiration: URL tokens expire periodically; get a fresh URL if disconnected
- GPU quota: Kaggle limits GPU hours (~30h/week for free accounts)
- File persistence: Files in /kaggle/working/ persist only for the duration of the session
Troubleshooting
"Connection failed"
- Ensure your Kaggle notebook is running (green "Running" status)
- Get a fresh URL if token expired (tokens expire periodically)
- Verify the notebook has GPU enabled and is not idle
"Kernel is busy"
- KaggleRun auto-interrupts busy kernels
- Use --timeout for long-running code
- Check whether the notebook is running other code
"No GPU available"
- Enable GPU in Kaggle: Settings → Accelerator → GPU
- Check GPU quota at kaggle.com/me/quota
Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.
# Development setup
git clone https://github.com/kagglerun/kagglerun
cd kagglerun
pip install -e ".[dev]"
pytest tests/ -v
License
MIT License - see LICENSE for details.
Acknowledgments
Star this repo if it helps your ML workflow!
Download files
File details
Details for the file kagglerun-0.2.2.tar.gz.
File metadata
- Download URL: kagglerun-0.2.2.tar.gz
- Upload date:
- Size: 17.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f43e212c7115449dc926c78f9bbd08cb2ce0fa769879e7a062459d55ee13a849 |
| MD5 | 64ff87d6cb3648d77ba079b1c079f5f0 |
| BLAKE2b-256 | 551326f9bf33192a2054f2ef36d37f2a05cd110a79f661e8764017868b3f8985 |
File details
Details for the file kagglerun-0.2.2-py3-none-any.whl.
File metadata
- Download URL: kagglerun-0.2.2-py3-none-any.whl
- Upload date:
- Size: 16.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d5fec729b76300b1d6337d108e85a109b13f65eb22ac5e4d01c2e44d11a127cd |
| MD5 | 49c152d474241d2268f99a767ad33b16 |
| BLAKE2b-256 | 080bcf6705a5f9e670405df7eeef4fa8d6dee7627b67192be6e05a75f0498326 |