# pyrun-jupyter

Execute Python projects and `.py` files on remote Jupyter servers.
## Installation

```bash
pip install pyrun-jupyter
```
## Quick Start

```python
from pyrun_jupyter import JupyterRunner

# Kaggle-style workflow: sync a local project, run an entrypoint, download artifacts
with JupyterRunner("http://localhost:8888", token="your_token") as runner:
    result = runner.run_project(
        "./my_project",
        "train.py",
        artifact_paths=["outputs/*.pth", "outputs/metrics.json"],
        local_artifact_dir="./artifacts",
    )
    print(result.stdout)
    print(result.data["artifacts"])
```
## Features
- 🐍 Execute whole local Python projects on remote Jupyter kernels
- 📁 Sync a project directory so imports across files keep working
- 🎯 Run a chosen project entrypoint from the synced remote workspace
- 📤 Pass parameters to your scripts
- 📦 Download training artifacts such as models, plots, and metrics
- 📥 Capture stdout, stderr, and rich outputs
- 🔄 Kernel management (start, stop, restart)
- 🔌 Connect to existing kernels
- ⚡ Context manager support for automatic cleanup
## Usage Examples

### Project Workflow for Kaggle and Other Remote Kernels

This is the recommended workflow for ordinary `.py` projects. `run_project()` uploads a local directory, runs the selected entrypoint inside that synced directory so relative imports keep working, and downloads the requested artifacts afterward.
```python
from pyrun_jupyter import JupyterRunner

with JupyterRunner("http://kaggle-jupyter-server", token="xxx") as runner:
    result = runner.run_project(
        "./trainer",
        "train.py",
        params={
            "epochs": 10,
            "learning_rate": 0.001,
        },
        artifact_paths=[
            "outputs/*.pth",
            "outputs/metrics.json",
        ],
        local_artifact_dir="./results",
    )
    print(result.stdout)
    print(result.data["artifacts"])
```
Example project layout:

```
trainer/
├── train.py
├── model.py
├── utils/
│   └── data.py
└── outputs/
```

If `train.py` imports `model.py` or modules under `utils/`, those imports continue to work after the project directory is synced to the remote kernel.
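Why this works can be sketched entirely locally: running the entrypoint with the project directory as its working directory is enough for sibling imports to resolve. The file contents below are illustrative stand-ins, not part of the library.

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Build a tiny stand-in project: train.py imports its sibling model.py
    project = os.path.join(tmp, "trainer")
    os.makedirs(project)
    with open(os.path.join(project, "model.py"), "w") as f:
        f.write("def build():\n    return 'model'\n")
    with open(os.path.join(project, "train.py"), "w") as f:
        f.write("import model\nprint(model.build())\n")

    # Running the entrypoint with the project directory as the working
    # directory lets `import model` resolve, just as it does after the
    # directory is synced to the remote kernel.
    out = subprocess.run(
        [sys.executable, "train.py"],
        cwd=project,
        capture_output=True,
        text=True,
        check=True,
    )
    print(out.stdout.strip())  # model
```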
### Basic Usage

```python
from pyrun_jupyter import JupyterRunner

runner = JupyterRunner("http://jupyter-server:8888", token="xxx")

# Execute code
result = runner.run("x = 42; print(f'The answer is {x}')")
print(result.stdout)  # The answer is 42

# Clean up
runner.stop_kernel()
```
### Using a Context Manager (Recommended)

```python
from pyrun_jupyter import JupyterRunner

with JupyterRunner("http://localhost:8888", token="xxx") as runner:
    result = runner.run_file("my_script.py")
    print(result.stdout)
# Kernel automatically stopped
```
### Passing Parameters to Scripts

```python
from pyrun_jupyter import JupyterRunner

with JupyterRunner("http://localhost:8888", token="xxx") as runner:
    # Parameters are injected as variables in your script
    result = runner.run_file(
        "train_model.py",
        params={
            "learning_rate": 0.001,
            "epochs": 100,
            "batch_size": 32,
        }
    )
    print(result.stdout)
```
Your `train_model.py` can use these variables directly:

```python
# train_model.py
print(f"Training with lr={learning_rate}, epochs={epochs}")
# ... your training code
```
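The injection mechanism can be sketched as prepending variable assignments to the script source before execution. This is a hypothetical reconstruction for illustration, not the library's actual implementation; `inject_params` is an invented name.

```python
# Hypothetical sketch of parameter injection: each key/value pair becomes a
# variable assignment prepended to the script source before it is executed.
def inject_params(source: str, params: dict) -> str:
    preamble = "\n".join(f"{name} = {value!r}" for name, value in params.items())
    return f"{preamble}\n{source}"

script = 'print(f"Training with lr={learning_rate}, epochs={epochs}")'
code = inject_params(script, {"learning_rate": 0.001, "epochs": 100})
exec(code)  # Training with lr=0.001, epochs=100
```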
You can also pass parameters to a project entrypoint:

```python
with JupyterRunner("http://localhost:8888", token="xxx") as runner:
    result = runner.run_project(
        "./trainer",
        "train.py",
        params={"epochs": 50, "batch_size": 32},
    )
```
### Handling Errors

```python
from pyrun_jupyter import JupyterRunner, ExecutionError

runner = JupyterRunner("http://localhost:8888", token="xxx")

result = runner.run("1/0")
if result.has_error:
    print(f"Error: {result.error_name}: {result.error}")
    # Error: ZeroDivisionError: division by zero
```
### Connecting to an Existing Kernel

```python
runner = JupyterRunner("http://localhost:8888", token="xxx", auto_start_kernel=False)

# List available kernels
kernels = runner.list_kernels()
print(kernels)

# Connect to a specific kernel
runner.connect_to_kernel("existing-kernel-id")
result = runner.run("print('Using existing kernel!')")
```
### Managing Kernels

```python
runner = JupyterRunner("http://localhost:8888", token="xxx")

# Start a specific kernel type
runner.start_kernel("python3")

# Restart kernel (clears state)
runner.restart_kernel()

# Stop kernel when done
runner.stop_kernel()
```
## CLI

### Project-Oriented Command

```bash
pyrun-jupyter run-project ./trainer train.py \
  --url http://localhost:8888 \
  --token xxx \
  --artifact "outputs/*.pth" \
  --artifact "outputs/metrics.json" \
  --artifact-dir ./results \
  --exclude .git \
  --exclude __pycache__
```

### Low-Level Commands

```bash
pyrun-jupyter run-file script.py --url http://localhost:8888 --token xxx
pyrun-jupyter run "print('hello')" --url http://localhost:8888 --token xxx
```
## ExecutionResult

The `run()`, `run_file()`, and `run_project()` methods return an `ExecutionResult` object:

| Attribute | Type | Description |
|---|---|---|
| `stdout` | `str` | Standard output |
| `stderr` | `str` | Standard error |
| `success` | `bool` | Whether execution succeeded |
| `error` | `str` | Error message (if failed) |
| `error_name` | `str` | Exception type (e.g. `ValueError`) |
| `error_traceback` | `list` | Full traceback |
| `data` | `dict` | Rich output (`text/plain`, `text/html`, etc.) |
| `execution_count` | `int` | Jupyter cell execution count |

When using `run_project()`, downloaded local artifact paths are stored in `result.data["artifacts"]`.
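For illustration, the shape of this object can be sketched as a dataclass. This is a hypothetical stand-in mirroring the table above; the library's actual `ExecutionResult` class may differ in detail.

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified, hypothetical stand-in for ExecutionResult (illustration only)
@dataclass
class ResultSketch:
    stdout: str = ""
    stderr: str = ""
    success: bool = True
    error: Optional[str] = None
    error_name: Optional[str] = None
    error_traceback: list = field(default_factory=list)
    data: dict = field(default_factory=dict)
    execution_count: int = 0

r = ResultSketch(stdout="The answer is 42\n", data={"artifacts": ["./results/model.pth"]})
print(r.success, r.data["artifacts"][0])  # True ./results/model.pth
```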
## File Transfer

### Upload/Download via the Contents API

```python
with JupyterRunner("http://localhost:8888", token="xxx") as runner:
    # Upload a single file
    runner.upload_file("local_data.csv", "data/input.csv")

    # Upload an entire directory
    runner.upload_directory(
        "./my_project",
        remote_dir="project",
        pattern="**/*.py",
        exclude_patterns=["__pycache__", "*.pyc"]
    )

    # Download a file
    runner.download_file("output/model.pt", "./local/model.pt")

    # Download multiple files
    runner.download_files(
        ["output/model.pt", "output/metrics.json"],
        local_dir="./results"
    )
```
### Upload/Download via Kernel (for Kaggle, etc.)

Some environments (like Kaggle) don't support the Contents API. Use the kernel-based methods instead:

```python
with JupyterRunner(kaggle_url) as runner:
    result = runner.run_project(
        "./my_project",
        "train.py",
        artifact_paths=["outputs/*.pth", "outputs/*.png"],
        local_artifact_dir="./results",
    )
```

For advanced workflows, the lower-level helpers are still available:

```python
with JupyterRunner(kaggle_url) as runner:
    runner.upload_directory_via_kernel("./my_project", remote_dir="project")
    runner.run("import os; os.chdir('project'); exec(open('train.py').read())")
    runner.download_kernel_files(
        ["outputs/model.pth", "outputs/results.png"],
        local_dir="./results",
        working_dir="project",
        flatten=False,
    )
```
### Notes for Kaggle Workflows

- `run_project()` is the recommended API for Kaggle-oriented development with normal `.py` files.
- The package syncs project files, not Python package dependencies. Third-party libraries are expected to already be installed in the remote environment.
- Each `run_project()` call prepares a fresh remote project directory by default, so stale files from previous runs do not affect the next execution.
- Artifact paths can be exact file paths or glob patterns relative to the remote project root.
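The glob-style matching described in the last note can be sketched with the standard library. Using `fnmatch` and these file names is an illustrative assumption, not necessarily how the package implements matching.

```python
import fnmatch

# Candidate files under a hypothetical remote project root
remote_files = [
    "outputs/model_final.pth",
    "outputs/metrics.json",
    "outputs/debug.log",
    "train.py",
]
patterns = ["outputs/*.pth", "outputs/metrics.json"]

# Keep every file that matches at least one artifact pattern
matched = [f for f in remote_files if any(fnmatch.fnmatch(f, p) for p in patterns)]
print(matched)  # ['outputs/model_final.pth', 'outputs/metrics.json']
```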
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `url` | (required) | Jupyter server URL |
| `token` | `None` | Authentication token |
| `kernel_name` | `"python3"` | Kernel specification to use |
| `auto_start_kernel` | `True` | Start kernel automatically on first run |
| `reuse_kernel` | `True` | Reuse existing kernel if available |
## Getting Your Jupyter Token

### From Jupyter Notebook/Lab

When you start Jupyter, it displays a URL containing the token:

```
http://localhost:8888/?token=abc123...
```
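If you want to extract the token from that URL programmatically, the standard library suffices (the URL and token value below are placeholders):

```python
from urllib.parse import urlparse, parse_qs

# Parse the query string of the URL Jupyter prints at startup
url = "http://localhost:8888/?token=abc123"
token = parse_qs(urlparse(url).query)["token"][0]
print(token)  # abc123
```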
### Generate a Permanent Token

```bash
jupyter server --generate-config
# Edit ~/.jupyter/jupyter_server_config.py
# Set: c.ServerApp.token = 'your-secret-token'
```
## Requirements

- Python >= 3.8
- A running Jupyter server (Notebook, Lab, or Hub)

## License

MIT