Headless Python SDK + CLI for the Nexus skill platform (bundles nexus_core)
Nexus SDK
Headless Python library + nexus CLI that lets a customer's own client (e.g.
OPENCLAW) drive the Nexus Runner's two user-facing surfaces — chat (run
skills, including the agentic chat loop) and the skill library (browse /
install / uninstall / sync / balance) — without the Electron Runner installed.
It shares the Runner's data dir, .ocskill formats, license/offline-token flow
and shared Python env via the bundled nexus_core package (the same code the
Runner's api.py wraps in FastAPI).
Custom-skill authoring (the Runner's Workbench, 工作台) is intentionally not part of
the SDK — authoring stays in the Runner GUI / the dev sdk/build/ scripts.
Layout
core_service/
nexus_runner/ Electron app — api.py (FastAPI) + electron/ + src/ (Vue);
runner.py / installer.py / bootstrap.py / license_client.py /
token_cache.py / migrate.py / skill_review.py / paths.py /
runtime_env.py are thin re-exports of nexus_core.
sdk/
pyproject.toml packages nexus_sdk + nexus_core → pip install ./core_service/sdk
README.md this file
nexus_core/ GUI-/HTTP-free core:
paths · package (read/pack .ocskill: free tar.gz / paid
OCSK-v2 AES-256-GCM / legacy ZIP + SHA256SUMS) ·
package_verify (Ed25519) · runtime_env · bootstrap
(shared <data>/python/ + on-demand skill-dep installs) ·
installer · license_client · token_cache · migrate ·
skill_review · runner (run_skill: online/offline +
license + decrypt + exec) · registry (catalog /
my-skills / balance / install-token / download) ·
agent_loop (headless chat loop + prompt-skill loop) · config
nexus_sdk/ SkillClient (high-level facade) · cli.py (`nexus`) ·
tools.py (expose skills as tool specs for your own agent loop) ·
legacy SkillLoader / SkillExecutor (back-compat)
tests/ pytest suite (no network / no Runner)
build/ dev/platform build scripts (not part of the published SDK)
Data dir: NEXUS_DATA_DIR if set (the Electron Runner passes app.getPath('userData'),
i.e. %APPDATA%\Nexus on Windows); otherwise %APPDATA%\Nexus on Windows,
or ~/.nexus elsewhere; see nexus_core.paths.
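The lookup order above can be sketched as follows (a hedged sketch; nexus_core.paths is the authoritative implementation, and the helper name here is illustrative):

```python
import os
import sys
from pathlib import Path

def data_dir() -> Path:
    # 1. explicit override (the Electron Runner sets this to app.getPath('userData'))
    override = os.environ.get("NEXUS_DATA_DIR")
    if override:
        return Path(override)
    # 2. %APPDATA%\Nexus on Windows
    appdata = os.environ.get("APPDATA")
    if sys.platform == "win32" and appdata:
        return Path(appdata) / "Nexus"
    # 3. ~/.nexus everywhere else
    return Path.home() / ".nexus"
```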
Install
pip install ./core_service/sdk # nexus_sdk + nexus_core + the `nexus` CLI
pip install "./core_service/sdk[keyring]" # + OS secret store for the API key
pip install "./core_service/sdk[legacy-zip]" # + pyzipper for old AES-ZIP .ocskill
Library
from nexus_sdk import SkillClient
c = SkillClient(api_key="sk_live_...") # or NEXUS_API_KEY env / `nexus config set`
# chat — agentic: the LLM picks/installs/runs skills via tools
res = c.chat("merge these two excels by 科目",
             ask_handler=lambda prompt, schema, choices=None: my_app.ask_user(prompt, schema))
print(res["reply"]); print(res["tool_calls"])
# or run a skill directly / a prompt skill
c.run("excel-merge-filter", inputs={"input_0": "/a.xlsx"}, params={"mode": "and"})
c.run_prompt("code-review", "review this diff: ...")
# skill library
c.catalog("excel") # marketplace listing
c.install("excel-merge-filter") # download + license(if paid) + decrypt + install + pip deps
c.uninstall("excel-merge-filter")
c.list_installed(); c.my_skills(); c.sync()
c.balance() # license balance / offline-token inventory
c.prefetch("bank-match", count=10)
nexus_core is usable directly too (from nexus_core.runner import run_skill_sync, etc.).
Integrating chat into your client (OPENCLAW)
c.chat(message, *, history=None, ask_handler=None, skill_hints=None, max_turns=8, locale="zh", on_delta=None, ref_table=None, llm_base_url=None, llm_api_key=None, model=None)
runs one agentic turn. The LLM sees five client-side tools — question / list_skills
/ load_skill / run_skill / read_ref — under a progressive-disclosure system
prompt; run_skill is auto-resolved via this client (installs on demand), and question
is routed to your ask_handler(prompt, schema, choices) -> str (omit it and such turns
yield an error that the LLM works around). on_delta(text) streams assistant tokens; a
ref_table keeps file paths / secrets / large outputs out of the LLM context. Returns
{"reply", "messages", "tool_calls"}; pass messages back as history.
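chat delegates user-facing questions to your client. A minimal terminal implementation of the ask_handler(prompt, schema, choices) -> str contract above might look like this (a sketch; the read= parameter is added here purely for testability and is not part of the SDK API):

```python
def ask_handler(prompt, schema=None, choices=None, read=input):
    # present numbered choices when the tool supplies them, else free-form input
    if choices:
        menu = "\n".join(f"{i}. {c}" for i, c in enumerate(choices, 1))
        pick = read(f"{menu}\n{prompt} [1-{len(choices)}]: ")
        return choices[int(pick) - 1]
    return read(f"{prompt}: ")
```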
The SDK ships no model. By default the loop calls the platform LLM proxy (metered,
billed to your API key) at <server>/api/v1/chat/completions — the same path skills hit.
NEXUS_LLM_URL (server root) overrides server_url for this. To run it against your
own OpenAI-compatible LLM instead, pass llm_base_url= (e.g. https://api.openai.com/v1),
llm_api_key= and model= (that key is used only for the LLM call, never for skill licensing).
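The endpoint precedence described above can be sketched like this (the helper name is illustrative; the real resolution lives inside the SDK):

```python
import os

def resolve_llm_endpoint(server_url, llm_base_url=None):
    # explicit BYO endpoint wins; otherwise NEXUS_LLM_URL overrides the
    # server root; otherwise the platform-proxy path is appended
    if llm_base_url:
        return llm_base_url.rstrip("/") + "/chat/completions"
    root = os.environ.get("NEXUS_LLM_URL") or server_url
    return root.rstrip("/") + "/api/v1/chat/completions"
```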
Using Nexus skills as tools in YOUR own agent loop
If you already have an agent framework and just want the skills as callable tools (no Nexus chat loop), use the adapter:
from nexus_sdk import SkillClient, skill_tool_specs, dispatch_tool_call
c = SkillClient()
tools = skill_tool_specs(c, query="excel") # OpenAI function-tool specs
# register `tools` in your own LLM call; when the model emits e.g.
# name="nexus__excel-merge-filter", arguments={"input_0": "...", "mode": "and"}:
result = dispatch_tool_call(c, name, arguments,
max_output_bytes=8_000, # optional — cap context bloat from big results
confirm_handler=lambda slug, info, args, reason: True) # optional gate for confirm_required skills
dispatch_tool_call returns {ok: True, slug, output} on success (or
{ok, slug, output_ref, output_preview, truncated} when capped; pass a
ref_table= to keep the full output reachable). Cloud-execution-mode skills are
NOT auto-installed (manifest comes from the catalog; client.run handles cloud
routing). confirm_required skills are flagged in the return; omit
confirm_handler for back-compat (runs anyway, returns confirm_required: true).
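In your own loop you then serialize that result back into the model's context. A hedged sketch of handling the documented return shapes (the helper is illustrative, and the failure branch assumes some error detail is present):

```python
def render_for_llm(result):
    # three shapes per the docs above: failure, capped output, plain success
    if not result.get("ok"):
        return f"tool {result.get('slug')} failed"
    if result.get("truncated"):
        # the full output stays reachable through your ref_table via output_ref
        return f"[truncated, ref {result['output_ref']}] {result['output_preview']}"
    return str(result["output"])
```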
Anthropic (Claude) format — pass flavor="anthropic" and feed results back as tool_result:
import anthropic, json
client = anthropic.Anthropic() # uses ANTHROPIC_API_KEY env
nexus = SkillClient()
tools = skill_tool_specs(nexus, query="excel", flavor="anthropic") # [{name, description, input_schema}, ...]
msg = client.messages.create(model="claude-opus-4-7", max_tokens=2048, tools=tools,
messages=[{"role": "user", "content": "merge these two excels by 科目"}])
# the model may emit a tool_use block:
for block in msg.content:
    if block.type == "tool_use":
        out = dispatch_tool_call(nexus, block.name, block.input, max_output_bytes=8000)
        tool_result = {"type": "tool_result", "tool_use_id": block.id,
                       "content": json.dumps(out, ensure_ascii=False, default=str),
                       "is_error": not out.get("ok", False)}
# feed `tool_result` back as the next user-turn content; loop until no tool_use blocks
Not ported from the Runner's renderer (src/engine/agent-loop.ts): the fixed-rules /
TF-IDF chat layer, the quota pre-charge UI, the confirm_required "type CONFIRM" gate.
CLI
nexus chat "merge these two excels by 科目" [--interactive] [--stream] [--turns 8] [--locale zh|en]
nexus chat "..." --llm-base-url https://api.openai.com/v1 --llm-api-key sk-... --model gpt-4o # BYO LLM
nexus run <slug> --inputs '{"input_0":"/a.xlsx"}' --params '{"mode":"and"}'
nexus run-prompt <slug> "your message" [--llm-base-url ... --llm-api-key ... --model ...]
nexus install <slug> | uninstall <slug> | list | sync
nexus search [query] | balance
nexus env provision | env clear | prefetch <slug> --count 10
nexus config show | config set api_key=sk_live_... | config set server_url=...
nexus migrate export <bundle> --pass <p> | migrate import <bundle> --pass <p>
nexus version
All commands print JSON to stdout (errors → JSON on stderr, exit 1).
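Because every command follows this JSON contract, wrapping the CLI from another process is mechanical. A sketch (the exe= parameter exists only so the helper is testable without the nexus binary on PATH):

```python
import json
import subprocess

def nexus_json(*args, exe="nexus"):
    # JSON on stdout on success; JSON on stderr plus exit code 1 on failure
    proc = subprocess.run([exe, *args], capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(json.loads(proc.stderr or "{}"))
    return json.loads(proc.stdout)
```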
.ocskill format
| Format | Magic | When | Payload |
|---|---|---|---|
| plain tar.gz | `1f 8b` | free skills | skill.json, main.py/SKILL.md, requirements.txt, assets/*, SHA256SUMS |
| OCSK-v2 | `OCSK` (4 B) + nonce (12 B) + AES-256-GCM(tar.gz) | paid skills | same payload inside; per-skill key = HMAC-SHA256(build_secret, slug), delivered by the license API |
| legacy ZIP / AES-ZIP | `PK\x03\x04` | old packages | same files; AES-ZIP needs the key as password (mcp-nexus-sdk[legacy-zip]) |
.ocskill packages carry skill source + requirements.txt only — never wheels.
Python deps are pip-installed on demand into the shared <data>/python/ env,
version-pinned project-wide via skill_packages/constraints.txt. No per-skill venvs.
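The three containers in the table can be told apart from their leading bytes, and the paid-skill key derivation is plain HMAC. A sketch under the assumptions above (function names are illustrative; in practice the license API delivers the per-skill key, you never derive it client-side):

```python
import hashlib
import hmac

def sniff_format(head: bytes) -> str:
    """Classify an .ocskill payload by its magic bytes, per the table above."""
    if head[:2] == b"\x1f\x8b":
        return "tar.gz"        # free skills: plain gzip stream
    if head[:4] == b"OCSK":
        return "ocsk-v2"       # 4-byte magic + 12-byte nonce + AES-256-GCM payload
    if head[:4] == b"PK\x03\x04":
        return "legacy-zip"
    raise ValueError("not a recognized .ocskill container")

def per_skill_key(build_secret: bytes, slug: str) -> bytes:
    # per-skill key = HMAC-SHA256(build_secret, slug)
    return hmac.new(build_secret, slug.encode("utf-8"), hashlib.sha256).digest()
```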
Testing
Two layers — pytest for unit tests (offline, no network), and a runnable script
for end-to-end smoke against a live backend.
Unit tests (offline, ~2s)
cd core_service/sdk
pip install -e ".[test]" # installs pytest + respx + the SDK editable
pytest -q # 44 tests; no network, no Runner, no API key needed
These cover: agent_loop's tool dispatch + streaming SSE assembly, ref-table,
skill_tools (partition/validate/resolve_for_run/confirm_info), tools adapter
(specs + dispatch + confirm gate + max_output_bytes), CLI subcommands,
config/migrate round-trips, package isolation (no nexus_runner / fastapi /
electron imports leak into the SDK).
End-to-end smoke (live backend; tests/integration_smoke.py)
A standalone script — not collected by pytest — that walks every SDK surface
against a live backend and prints PASS / SKIP / FAIL per step. Uses an isolated
NEXUS_DATA_DIR under the system temp dir so it cannot pollute your real
Runner install at %APPDATA%\Nexus.
# minimal (no network, no API key) — proves import / version / env / confirm gate / CLI
python core_service/sdk/tests/integration_smoke.py
# add live catalog + balance + tool-spec generation
NEXUS_API_KEY=sk_live_... NEXUS_SERVER_URL=https://mcp-nexus.online \
python core_service/sdk/tests/integration_smoke.py
# add: actually install a free skill, run it, then uninstall
NEXUS_API_KEY=sk_live_... \
python core_service/sdk/tests/integration_smoke.py --with-install \
--skill action-item-extractor
# add: exercise the agentic chat loop through the platform proxy (consumes LLM quota)
NEXUS_API_KEY=sk_live_... \
python core_service/sdk/tests/integration_smoke.py --with-install --with-chat
# add: also exercise BYO LLM (your own OpenAI-compatible endpoint)
NEXUS_API_KEY=sk_live_... \
NEXUS_SMOKE_BYO_BASE_URL=https://api.openai.com/v1 \
NEXUS_SMOKE_BYO_API_KEY=sk-... \
NEXUS_SMOKE_BYO_MODEL=gpt-4o-mini \
python core_service/sdk/tests/integration_smoke.py --with-chat
Steps covered, in order: import + __version__, SkillClient instantiation
into an isolated data dir, shared Python env provisioning (ensure_env),
live catalog(), balance(), tool-spec generation (OpenAI + Anthropic
flavors), install + run + uninstall of a free skill, the confirm_handler
gate (offline, with a fake client), platform-proxy chat(), BYO-LLM chat(),
and python -m nexus_sdk.cli version as a CLI smoke. Each step prints its
own elapsed time; exit code 0 iff zero FAIL.
Tunable env vars: NEXUS_API_KEY · NEXUS_SERVER_URL · NEXUS_SMOKE_FREE_SLUG
(default action-item-extractor) · NEXUS_SMOKE_RUN_INPUTS / NEXUS_SMOKE_RUN_PARAMS
(JSON for the run step) · NEXUS_SMOKE_BYO_BASE_URL / NEXUS_SMOKE_BYO_API_KEY /
NEXUS_SMOKE_BYO_MODEL. CLI flags: --with-install, --with-chat,
--skill <slug>, --keep-data-dir.
Isolated install (verifies pip install ./core_service/sdk works clean)
python -m venv /tmp/mcp-nexus-sdk-test && source /tmp/mcp-nexus-sdk-test/bin/activate  # Windows: \Scripts\activate
pip install ./core_service/sdk
nexus version
python -c "from nexus_sdk import SkillClient, skill_tool_specs, dispatch_tool_call; print('ok')"
python core_service/sdk/tests/integration_smoke.py # works without the editable install too
The CI matrix (.github/workflows/sdk.yml) already runs this clean install
across 3 OSes × Python 3.10–3.13.
Notes
- Shared Python env: skill deps install into <data>/python/ (provisioned on demand:
  copies a bundled CPython if shipped, else downloads python-build-standalone),
  exactly like the Runner. Set NEXUS_PYTHON=<path> to use a specific interpreter.
- Local Runner: if a Runner is running on localhost:7432, the legacy SkillLoader
  prefers its HTTP API; SkillClient always works locally via nexus_core.
- The vestigial nexus_sdk/setup.py is superseded by core_service/sdk/pyproject.toml.
File details
Details for the file mcp_nexus_sdk-1.3.0.tar.gz.
File metadata
- Download URL: mcp_nexus_sdk-1.3.0.tar.gz
- Upload date:
- Size: 88.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | eecc33c7d71d9750ab14894bf38317b015cb991683bf50e26616b8aa6ba36765 |
| MD5 | e1fa6683e8a6d1c00c34e2796ff2a242 |
| BLAKE2b-256 | b5afb817da3e16c2437d19eb019a3423a92737a9e38e6c39a9bb6d3ffa148430 |
File details
Details for the file mcp_nexus_sdk-1.3.0-py3-none-any.whl.
File metadata
- Download URL: mcp_nexus_sdk-1.3.0-py3-none-any.whl
- Upload date:
- Size: 90.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 345824bf49c3c5ce85fc9f4107efe1926ffd349e42535a0b8b6b80290bdeb6ed |
| MD5 | 61e1b5b7c777fa347cab717bed881178 |
| BLAKE2b-256 | ce23c10004c2c097e878123316ca7ec4a7d0f68c2e9f31836ee28b6a1cab711d |