# qhub-serve

Local development server for Kipu Quantum Hub services.
Runs a Kipu Quantum Hub workspace locally and exposes it through an HTTP server that imitates the platform runtime — same async job lifecycle, same API endpoints, same `run(...)` invocation convention.

qhub-serve runs inside the workspace's own uv venv, so all of the workspace's dependencies are naturally available.
## How the platform runtime works
The Kipu Quantum Hub platform invokes your service by calling a `run` function in `src/program.py`. The function parameters are derived directly from the request body — each top-level key maps to a parameter by name:

```python
def run(values: list[float], files: DataPool) -> ResultModel:
    ...
```
qhub-serve does exactly the same thing locally, wrapping your `run` function in an HTTP server that exposes the full job lifecycle API (submit → poll status → fetch result).
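The request-to-parameter mapping can be sketched in a few lines of Python. This is an illustrative simplification, not qhub-serve's actual dispatch code; `invoke_entrypoint` is a hypothetical helper:

```python
import inspect
import json


def invoke_entrypoint(run, body: str):
    """Map each top-level key of a JSON request body to a run() parameter by name."""
    kwargs = json.loads(body)
    unknown = set(kwargs) - set(inspect.signature(run).parameters)
    if unknown:
        raise TypeError(f"unexpected request keys: {sorted(unknown)}")
    return run(**kwargs)


# A trivial entrypoint for demonstration
def run(values: list[float]) -> dict:
    return {"sum": sum(values)}


result = invoke_entrypoint(run, '{"values": [1, 2, 3]}')
# → {"sum": 6}
```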
## Usage
### Via the qhubctl CLI (recommended)

```shell
qhubctl serve
```

The `qhubctl serve` command adds qhub-serve as a dev dependency to your workspace (if not already present) and runs it via `uv run`, so it always executes inside your workspace's own venv, where all your dependencies are installed.
### Without the CLI
qhub-serve is a standard Python package and can be used independently of the qhub CLI.
The key requirement is that it runs inside your workspace's own venv so that all of your service's dependencies are available.
1. Add qhub-serve as a dev dependency to your workspace:

   ```shell
   cd /path/to/your/workspace
   uv add --dev qhub-serve
   ```
2. Run it from your workspace directory. `uv run` automatically activates the workspace venv before executing the command, so your service dependencies are always on the path:

   ```shell
   uv run qhub-serve
   ```

   This is equivalent to what `qhubctl serve` does, without the CLI wrapper.

   The `--workspace` flag defaults to the current directory, so the above is the same as:

   ```shell
   uv run qhub-serve --workspace .
   ```
To target a workspace located elsewhere:

```shell
uv run --project /path/to/your/workspace qhub-serve --workspace /path/to/your/workspace
```
**Using a pre-activated venv directly.** If your workspace venv is already activated (e.g. `source .venv/bin/activate`), you can invoke the script directly:

```shell
qhub-serve
# or
python -m qhub_serve
```
### Options

```
qhub-serve [--workspace DIR] [--host HOST] [--port PORT] [--entrypoint MODULE:FUNCTION]
```

| Option | Description |
|---|---|
| `--workspace DIR` | Path to the workspace directory (default: current working directory) |
| `--host HOST` | Bind host (default: `127.0.0.1`) |
| `--port PORT` | Bind port (default: `8081`) |
| `--entrypoint MOD:FN` | Entrypoint in `module:function` notation, relative to the workspace (default: `src.program:run`) |
The server watches the workspace directory and reloads automatically on any file change.
## HTTP API
The server exposes the same API the Kipu Quantum Hub platform uses:
| Method | Path | Description |
|---|---|---|
| `GET` | `/` | List all service executions |
| `POST` | `/` | Submit a service execution — returns immediately with `status: PENDING` |
| `GET` | `/{id}` | Poll execution status (`PENDING` / `RUNNING` / `SUCCEEDED` / `FAILED` / `CANCELLED`) |
| `GET` | `/{id}/result` | Fetch the result once the job has succeeded |
| `GET` | `/{id}/result/{file}` | Download a specific result file |
| `GET` | `/{id}/log` | Get log entries for a service execution |
| `PUT` | `/{id}/cancel` | Cancel a running execution |
Example — submit a job:

```shell
curl -X POST http://127.0.0.1:8081/ \
  -H "Content-Type: application/json" \
  -d '{"values": [1, 2, 3]}'
# → {"id": "abc-123", "status": "PENDING", ...}
```
Poll until done, then fetch the result:

```shell
curl http://127.0.0.1:8081/abc-123
# → {"id": "abc-123", "status": "SUCCEEDED", ...}

curl http://127.0.0.1:8081/abc-123/result
# → {"sum": 6.0, "_links": {...}, "_embedded": {...}}
```
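In a scripted client, the poll step can be wrapped in a small helper. A sketch built on the status model above (`poll_until_done` and `fetch_status` are hypothetical names; `fetch_status` would wrap a `GET /{id}` request and return the `status` field):

```python
import time

# Terminal states, per the status list above
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}


def poll_until_done(fetch_status, interval: float = 1.0, timeout: float = 120.0) -> str:
    """Call fetch_status() until a terminal state is returned or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still {status} after {timeout}s")
        time.sleep(interval)
```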
Interactive API docs are available at http://127.0.0.1:8081/docs while the server is running.
## Result files

When a service execution completes, every file written to the output directory by your `run` function is exposed through the API as a HAL link in `_links` and can be downloaded individually.
### How it works

qhub-serve creates a dedicated output directory for each execution at `<workspace>/out/<execution-id>/`.
The platform runtime injects this path via the `OUTPUT_DIRECTORY` environment variable, which `qhub.commons.runtime.output` writes to automatically.
After the execution finishes, `GET /{id}/result` returns:

```json
{
  "_links": {
    "status": { "href": "/{id}" },
    "output.json": { "href": "/{id}/result/output.json" },
    "hello.txt": { "href": "/{id}/result/hello.txt" }
  },
  "_embedded": {
    "status": { "id": "...", "status": "SUCCEEDED", ... }
  },
  "sum": 145.2
}
```
- Every file present in `out/<execution-id>/` appears as a named entry in `_links`.
- If an `output.json` file exists, its top-level keys are merged into the response body directly (e.g. `sum` in the example above).
- Individual files can be downloaded via `GET /{id}/result/{file}`.
Example — download a result file:

```shell
curl http://127.0.0.1:8081/abc-123/result/hello.txt
```
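The shape of the result document can be reproduced with a short sketch. This is a simplified reconstruction of the behavior described above, not the server's actual code: the `_embedded` status block is omitted for brevity, and `build_result_payload` is a hypothetical name.

```python
import json
from pathlib import Path


def build_result_payload(execution_id: str, out_dir: Path) -> dict:
    """HAL-style result: one _links entry per output file, output.json keys merged in."""
    body = {}
    links = {"status": {"href": f"/{execution_id}"}}
    for path in sorted(out_dir.iterdir()):
        links[path.name] = {"href": f"/{execution_id}/result/{path.name}"}
        if path.name == "output.json":
            # Merge top-level keys of output.json into the response body
            body.update(json.loads(path.read_text()))
    body["_links"] = links
    return body
```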
## Data pools

Data pools are read-only directories of files that are mounted into the service at runtime.
They are exposed to your `run` function as `DataPool` parameters — the parameter name determines which subdirectory is mounted.
### Directory layout

Place your data pool files under `<workspace>/datapool/<parameter-name>/`:

```
workspace/
  datapool/
    files/              # → injected as the `files: DataPool` parameter
      measurements.csv
      offsets.csv
```
qhub-serve sets `DATAPOOL_DIRECTORY` to `<workspace>/datapool` automatically at startup, so no additional configuration is needed.
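The name-based mounting can be illustrated with a simplified sketch that matches `run`'s parameter names against subdirectories of the data pool root. This is an assumption-laden illustration, not qhub-serve's real resolution logic (which presumably also checks the `DataPool` annotation); `datapool_kwargs` is a hypothetical helper:

```python
import inspect
from pathlib import Path


def datapool_kwargs(run, datapool_root) -> dict:
    """Map each run() parameter to <root>/<parameter-name>/ if that directory exists."""
    root = Path(datapool_root)
    return {
        name: root / name
        for name in inspect.signature(run).parameters
        if (root / name).is_dir()  # only parameters with a matching subdirectory
    }
```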
### Accessing data pool files in your service

```python
import csv

from qhub.commons.datapool import DataPool


def run(values: list[float], files: DataPool) -> ...:
    for file_name in files.list_files():
        with files.open(file_name) as f:
            reader = csv.DictReader(f)
            for row in reader:
                ...
```
`DataPool.list_files()` returns a `{filename: absolute_path}` dict. `DataPool.open(name)` opens a file by name and supports use as a context manager.
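For unit tests that should not depend on the real `qhub.commons.datapool` package, a minimal local stand-in with the same two methods could look like this. `LocalDataPool` is a hypothetical class sketched from the surface described above; the real `DataPool` may behave differently:

```python
from pathlib import Path


class LocalDataPool:
    """Minimal stand-in mirroring DataPool.list_files() and DataPool.open()."""

    def __init__(self, directory):
        self._dir = Path(directory)

    def list_files(self):
        # {filename: absolute_path}, as described above
        return {p.name: str(p.resolve()) for p in self._dir.iterdir() if p.is_file()}

    def open(self, name, mode="r"):
        # The returned file object works as a context manager
        return (self._dir / name).open(mode)
```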
### Testing services that use data pools locally

1. Create the data pool directory inside your workspace and add sample files:

   ```
   workspace/
     datapool/
       files/
         sample_a.csv
         sample_b.csv
   ```

2. Start the server — data pool discovery is automatic:

   ```shell
   uv run qhub-serve --workspace workspace
   ```

3. Submit a job that exercises the data pool:

   ```shell
   curl -X POST http://127.0.0.1:8081/ \
     -H "Content-Type: application/json" \
     -d '{"values": [1.0, 2.0]}'
   ```
The `files` `DataPool` will be populated from `workspace/datapool/files/` and available to your `run` function just as it would be on the platform.
### CSV format

The sample workspace expects a single `value` column per file:

```
value
1.5
2.3
0.7
```
Multiple columns are supported — use `csv.DictReader` and access the relevant column by name.
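Summing the `value` column of such a file with `csv.DictReader` might look like this (`sum_value_column` is an illustrative helper, not part of any package):

```python
import csv
import io


def sum_value_column(fobj) -> float:
    """Sum the 'value' column of a CSV file object, accessing it by name."""
    return sum(float(row["value"]) for row in csv.DictReader(fobj))


total = sum_value_column(io.StringIO("value\n1.5\n2.3\n0.7\n"))  # approximately 4.5
```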
## Development

### Setup

Clone the repository and install all dependencies (including dev dependencies) into a local venv:

```shell
uv venv
uv sync
```
### Run the server against the built-in sample workspace

The repository ships a minimal sample workspace under `workspace/` that sums a list of floats.
It is the canonical target for local development and smoke-testing.

Run the server directly from the repo root using the local source:

```shell
uv run qhub-serve --workspace workspace
```
Then, in another terminal, submit a test job:

```shell
curl -X POST http://127.0.0.1:8081/ \
  -H "Content-Type: application/json" \
  -d '{"values": [1, 5.2, 20, 7, 9.4]}'
```
Alternatively, drive uvicorn directly (useful when iterating on the server itself, as it gives more control over reload behavior):

```shell
ENTRYPOINT=src.program:run PYTHONPATH=workspace uv run uvicorn qhub_serve.app:app --reload
```
### Run the tests

```shell
uv run pytest
```

### Build the package

```shell
uv build
```
## Publishing

Releases are managed via semantic-release (conventional commits). On merge to the default branch, the CI pipeline:

- Runs tests
- Determines the next version from commit messages
- Publishes the new version to PyPI via `uv publish`
The project must be set up as a Trusted Publisher on PyPI for this to work.
## qhubctl CLI integration

The qhubctl CLI drives qhub-serve via uv, ensuring it always runs inside the user's workspace venv, where all workspace dependencies are available.

The `qhubctl serve` command does the following:

```shell
# 1. Add qhub-serve as a dev dependency (idempotent)
uv add --dev qhub-serve --project <workspace-dir>

# 2. Run it inside the workspace venv
uv run --project <workspace-dir> qhub-serve --workspace <workspace-dir>
```
The only prerequisite is that uv is installed and the workspace is a valid uv project, both of which are guaranteed by the qhubctl CLI's workspace bootstrap flow.
## License
Apache-2.0 | Copyright 2026-present Kipu Quantum GmbH