

KaggleRun


Execute Python on Kaggle's FREE GPUs from your terminal. No browser needed.

Perfect for AI coding agents like Claude Code, Cursor, and Cline.

pip install kagglerun
kagglerun --url <your-kaggle-url> "import torch; print(torch.cuda.get_device_name(0))"
# Output: NVIDIA Tesla T4 (or P100, H100 depending on availability)

Why KaggleRun?

Feature                    KaggleRun   Kaggle Web   Colab      RunPod
No browser needed          Yes         No           No         No
CLI-native                 Yes         No           No         Partial
AI agent ready (MCP)       Yes         No           No         No
Free GPU (T4/P100/H100)    Yes         Yes          No         No
Real-time output           Yes         Yes          Yes        Yes
Cost                       $0          $0           $0-10/mo   $0.50+/hr

KaggleRun bridges your local terminal to Kaggle's free GPU infrastructure.


Quick Start

1. Get Your Kaggle Jupyter URL

  1. Go to kaggle.com/code and create a new notebook
  2. Enable GPU: Settings → Accelerator → GPU T4 x2 or GPU P100
  3. Click the "..." menu → "Copy VS Code Server URL"
  4. Add /proxy to the end of the URL

Your URL looks like:

https://kkb-production.jupyter-proxy.kaggle.net/k/123456/eyJ.../proxy
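
Before connecting, you can sanity-check the URL shape. The pattern below simply mirrors the example above; it is an illustrative check, not an official validator:

from urllib.parse import urlparse

url = "https://kkb-production.jupyter-proxy.kaggle.net/k/123456/eyJ.../proxy"

# Illustrative check only: HTTPS scheme plus the /proxy suffix from step 4
parts = urlparse(url)
assert parts.scheme == "https" and parts.path.endswith("/proxy"), \
    "Copy the server URL from Kaggle and append /proxy"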

2. Install & Run

# Install
pip install kagglerun

# Set URL (optional - avoids --url each time)
export KAGGLE_JUPYTER_URL="https://your-url-here/proxy"

# Execute code
kagglerun "print('Hello from Kaggle GPU!')"

# Run a Python file
kagglerun train.py

# Check GPU
kagglerun --gpu-info

Features

CLI Commands

# Execute inline code
kagglerun "import torch; print(torch.cuda.is_available())"

# Execute a local Python file
kagglerun my_training_script.py

# Get GPU information
kagglerun --gpu-info

# Get system information (Python, PyTorch, CUDA)
kagglerun --system-info

# List files in /kaggle/working/
kagglerun --list-files

# Upload a file
kagglerun --upload data.csv

# Download a file
kagglerun --download results.csv --output ./local_results.csv

# Read a remote file
kagglerun --read output.txt

# Test connection
kagglerun --test

Python API

from kagglerun import KaggleExecutor

# Connect
executor = KaggleExecutor("https://your-kaggle-url/proxy")

# Execute code
result = executor.execute("""
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Device: {torch.cuda.get_device_name(0)}")
""")
print(result['output_text'])

# Run ML training
result = executor.execute("""
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10000, n_features=20)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X, y)
print(f"Score: {clf.score(X, y):.4f}")
""")

# Save file to Kaggle
executor.save_text("config.json", '{"epochs": 10}')

# Download trained model
model_bytes = executor.download_file("model.pkl")
with open("local_model.pkl", "wb") as f:
    f.write(model_bytes)

MCP Server (For AI Agents)

KaggleRun includes a Model Context Protocol (MCP) server, allowing AI assistants like Claude Code to execute code on Kaggle GPUs as a native tool.

Setup in Claude Code:

  1. Install with MCP support:

    pip install kagglerun[mcp]
    
  2. Add to your Claude Code MCP settings (~/.claude/mcp_settings.json):

    {
      "mcpServers": {
        "kagglerun": {
          "command": "python",
          "args": ["-m", "kagglerun.mcp_server"],
          "env": {
            "KAGGLE_JUPYTER_URL": "https://your-kaggle-url/proxy"
          }
        }
      }
    }
    
  3. Claude can now use these tools:

    • execute_python - Run code on GPU
    • get_gpu_info - Check GPU status
    • list_files - Browse remote files
    • save_file / read_file - File operations
    • download_file - Get files locally

Example conversation with Claude:

You: "Train a ResNet18 on CIFAR-10 using the Kaggle GPU"

Claude: Uses execute_python tool to run training on Kaggle GPU


Use Cases

1. ML Model Training

result = executor.execute("""
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True).cuda()
x = torch.randn(32, 3, 224, 224).cuda()

with torch.cuda.amp.autocast():
    output = model(x)

print(f"Output shape: {output.shape}")
print(f"GPU Memory: {torch.cuda.memory_allocated()/1e9:.2f} GB")
""")

2. Large Language Model Inference

result = executor.execute("""
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()

prompt = "The benefits of open source AI are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
""", timeout=300)

3. Data Processing at Scale

result = executor.execute("""
import pandas as pd
import numpy as np

# Generate large dataset
df = pd.DataFrame(np.random.randn(1_000_000, 100))
df.columns = [f'feature_{i}' for i in range(100)]

# Process
result = df.describe()
print(result)

# Save
df.to_parquet('/kaggle/working/processed.parquet')
print("Saved to /kaggle/working/processed.parquet")
""")

# Download result
data = executor.download_file("processed.parquet")
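
Since download_file() returns raw bytes (see the API reference), persisting and reloading the Parquet file locally takes two steps; note that pd.read_parquet requires pyarrow or fastparquet to be installed:

import pandas as pd

# `data` holds the raw bytes returned by download_file() above
with open("processed.parquet", "wb") as f:
    f.write(data)

df = pd.read_parquet("processed.parquet")  # requires pyarrow or fastparquet
print(df.shape)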

4. Kaggle Competition Submissions

# Upload your solution
executor.upload_file("submission.csv")

# Or generate on Kaggle
result = executor.execute("""
import pandas as pd

# Your prediction code here
predictions = [0.1, 0.9, 0.3, ...]

submission = pd.DataFrame({
    'id': range(len(predictions)),
    'target': predictions
})
submission.to_csv('/kaggle/working/submission.csv', index=False)
print("Submission ready!")
""")

API Reference

KaggleExecutor

executor = KaggleExecutor(
    base_url: str,           # Kaggle Jupyter proxy URL
    verbose: bool = True,    # Print status messages
    timeout: int = 120,      # Default execution timeout (seconds)
    on_output: Callable = None  # Callback for streaming output
)
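
For long-running jobs, on_output lets you stream output as it arrives rather than waiting for execute() to return. A sketch, assuming the callback receives each text chunk as a single string argument (the exact signature is not documented above):

from kagglerun import KaggleExecutor

def show(chunk):
    # Assumed callback shape: one text chunk per call
    print(chunk, end="", flush=True)

executor = KaggleExecutor(
    "https://your-kaggle-url/proxy",
    verbose=False,
    timeout=600,       # allow long-running cells up to 10 minutes
    on_output=show,    # stream output as it arrives instead of at the end
)
executor.execute("for i in range(10): print(i)")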

Methods:

Method                                  Description
execute(code, timeout, wait_idle)       Execute Python code
run_file(filepath, timeout)             Execute local .py file
upload_file(local_path, remote_name)    Upload file to /kaggle/working/
save_text(filename, content)            Save text to remote file
read_file(remote_path)                  Read remote text file
download_file(remote_path)              Download file as bytes
list_files(path)                        List directory contents
get_gpu_info()                          Get GPU information
get_system_info()                       Get Python/PyTorch info
test_connection()                       Test API connectivity
interrupt_kernel()                      Stop current execution
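
A short sketch chaining a few of these methods. The file names are placeholders, and printing the list_files() result assumes it returns something printable:

from kagglerun import KaggleExecutor

executor = KaggleExecutor("https://your-kaggle-url/proxy")

# Upload a local file, use it remotely, and inspect the output
executor.upload_file("data.csv")  # placeholder file name
result = executor.execute(
    "import pandas as pd; print(pd.read_csv('/kaggle/working/data.csv').shape)")
print(result['output_text'])

# If a cell runs away, stop it and see what was written so far
executor.interrupt_kernel()
print(executor.list_files("/kaggle/working/"))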

Convenience Function

from kagglerun import connect

executor = connect("https://your-url/proxy")  # raises ConnectionError on failure
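
Since connect() raises ConnectionError on failure, a guarded startup might look like this (a sketch; the causes in the comment come from the Troubleshooting section below):

from kagglerun import connect

try:
    executor = connect("https://your-url/proxy")
except ConnectionError:
    # Common causes (see Troubleshooting): stopped notebook, expired JWT,
    # or a URL missing its /proxy suffix
    raise SystemExit("Could not reach Kaggle - get a fresh URL and retry")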

How It Works

┌──────────────────┐     REST API      ┌───────────────────┐
│   Your Terminal  │ ◄──────────────► │  Kaggle Jupyter   │
│   or AI Agent    │                   │     Server        │
│                  │     WebSocket     │                   │
│  kagglerun       │ ◄══════════════► │  Python Kernel    │
│                  │   (real-time)     │  (GPU: T4/P100)   │
└──────────────────┘                   └───────────────────┘
  1. REST API - Kernel management (create, list, interrupt)
  2. WebSocket - Real-time code execution and output streaming
  3. Base64 - Binary file transfer (models, datasets)
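
For the curious, this maps onto the standard Jupyter kernel protocol. The sketch below shows the bare REST + WebSocket handshake using the generic requests and websocket-client packages; it illustrates the protocol, not KaggleRun's actual internals:

import json
import uuid

import requests
from websocket import create_connection  # pip install websocket-client

BASE = "https://your-kaggle-url/proxy"

# 1. REST: start a kernel
kernel_id = requests.post(f"{BASE}/api/kernels").json()["id"]

# 2. WebSocket: open the channels socket and send an execute_request
ws = create_connection(
    f"{BASE.replace('https://', 'wss://')}/api/kernels/{kernel_id}/channels")
msg_id = uuid.uuid4().hex
ws.send(json.dumps({
    "header": {"msg_id": msg_id, "msg_type": "execute_request",
               "username": "", "session": uuid.uuid4().hex, "version": "5.3"},
    "parent_header": {}, "metadata": {}, "channel": "shell",
    "content": {"code": "print('hello')", "silent": False,
                "store_history": True, "user_expressions": {},
                "allow_stdin": False},
}))

# 3. Stream iopub output until the kernel reports idle for this request
while True:
    msg = json.loads(ws.recv())
    if msg["msg_type"] == "stream":
        print(msg["content"]["text"], end="")
    if (msg["msg_type"] == "status"
            and msg["content"]["execution_state"] == "idle"
            and msg.get("parent_header", {}).get("msg_id") == msg_id):
        break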

Limitations

  • Session duration: Kaggle notebooks time out after ~12 hours of inactivity
  • JWT expiration: URL tokens expire periodically; get a fresh URL if disconnected
  • GPU quota: Kaggle limits GPU hours (~30h/week for free accounts)
  • File persistence: Files in /kaggle/working/ persist only for the duration of the session

Troubleshooting

"Connection failed"

  • Ensure your Kaggle notebook is running (green "Running" status)
  • Check that your URL ends with /proxy
  • Get a fresh URL if token expired
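
To script this check, poll test_connection() from the Python API (or kagglerun --test on the CLI). A sketch, assuming test_connection() returns a truthy value on success:

from kagglerun import KaggleExecutor

url = "https://your-kaggle-url/proxy"
executor = KaggleExecutor(url, verbose=False)
if not executor.test_connection():  # truthy-on-success return is an assumption
    print("Refresh the server URL in Kaggle and update KAGGLE_JUPYTER_URL")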

"Kernel is busy"

  • KaggleRun auto-interrupts busy kernels
  • Use --timeout for long-running code
  • Check if notebook is running other code

"No GPU available"

  • Enable GPU in Kaggle: Settings → Accelerator → GPU
  • Check GPU quota at kaggle.com/me/quota

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

# Development setup
git clone https://github.com/kagglerun/kagglerun
cd kagglerun
pip install -e ".[dev]"
pytest tests/ -v

License

MIT License - see LICENSE for details.




Star this repo if it helps your ML workflow!



Download files

Download the file for your platform.

Source Distribution

kagglerun-0.2.1.tar.gz (17.2 kB, Source)

Built Distribution

kagglerun-0.2.1-py3-none-any.whl (16.2 kB, Python 3)

File details

Details for the file kagglerun-0.2.1.tar.gz.

File metadata

  • Download URL: kagglerun-0.2.1.tar.gz
  • Size: 17.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for kagglerun-0.2.1.tar.gz
Algorithm    Hash digest
SHA256       48b5ffe384a8deda74f34cdc43dfa1fb43328b7de7ee10cb6da0f1c54539766d
MD5          90ff2cb44f370943f368ead5c2c1e95c
BLAKE2b-256  476db83cdb956f36d8c3b18fa96d67a9238054731da29518acd34e64a7a384f3


File details

Details for the file kagglerun-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: kagglerun-0.2.1-py3-none-any.whl
  • Size: 16.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for kagglerun-0.2.1-py3-none-any.whl
Algorithm    Hash digest
SHA256       15eca984a926dbeb9daa1a63088cc5d1d5b108e4a5e6d451ac996a056744ad13
MD5          0ba3a42e99baee0bae2e6f0556c7b327
BLAKE2b-256  4019eae7ba4cf0ac9f499345deb1f791c5602483aae5080614bd42df99e10ad6

