smxADK
smxADK stands for SyntaxMatrix Agent Development Kit.
It is a lightweight Python SDK for calling a deployed SyntaxMatrix Agent Service from any Python project.
Installation

From PyPI:

    pip install smxadk

For local development (from a checkout):

    pip install -e .
Basic Usage
    from smxadk import SMXAgentClient

    client = SMXAgentClient(
        base_url="https://your-agent-service-url",
    )

    response = client.chat(
        message="Explain RAG in two short sentences.",
        mode="expert",
    )

    print(response.answer)
    print(response.usage.total_tokens)
Self-Hosting Model
smxADK is designed for self-hosted SyntaxMatrix deployments.
The client organisation owns and operates its own backend infrastructure:
Client Application
↓
smxADK
↓
Client-owned Agent Service
↓
Client-owned LiteLLM Proxy
↓
Client-owned Ollama / vLLM backends
This means:
- the client owns the backend URLs;
- the client pays for their own GPUs, CPUs, storage, and networking;
- the client controls their own data boundary;
- SyntaxMatrix provides the SDK, deployment tooling, templates, and framework.
smxADK is not hardcoded to SyntaxMatrix-owned infrastructure. The caller must provide the URL of their own deployed Agent Service:
    from smxadk import SMXAgentClient

    client = SMXAgentClient(
        base_url="https://client-owned-agent-service-url",
    )
Deployment Configuration
Client organisations define their available model routes in:
smx_deployment.yaml
Example:
    agent_service:
      supported_modes:
        - light
        - medium
        - heavy
        - expert
        - expert-heavy
      models:
        light:
          provider: ollama
          model: your-light-model-name
          api_base: https://client-light-ollama-service-url
        medium:
          provider: ollama
          model: your-medium-model-name
          api_base: https://client-medium-ollama-service-url
        heavy:
          provider: ollama
          model: your-heavy-model-name
          api_base: https://client-heavy-ollama-service-url
        expert:
          provider: openai_compatible
          model: your-expert-model-name
          api_base: https://client-expert-vllm-service-url/v1
        expert-heavy:
          provider: openai_compatible
          model: your-expert-heavy-model-name
          api_base: https://client-expert-heavy-vllm-service-url/v1
A client does not need to deploy every possible route; they declare only the routes they actually operate.
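For example, a deployment that runs only a light Ollama route and an expert vLLM route might declare just those two (illustrative names, following the schema above):

```yaml
agent_service:
  supported_modes:
    - light
    - expert
  models:
    light:
      provider: ollama
      model: your-light-model-name
      api_base: https://client-light-ollama-service-url
    expert:
      provider: openai_compatible
      model: your-expert-model-name
      api_base: https://client-expert-vllm-service-url/v1
```

Requests for any undeclared mode (here, medium, heavy, or expert-heavy) are rejected by the Agent Service.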
Generating Deployment Files
Run:

    smxadk generate

This generates:

    ../llm-proxy/config.yaml
    ../agent-service/supported_modes.txt
The generated LiteLLM config maps each route to the client-owned backend URL.
The generated supported_modes.txt tells the Agent Service which modes it should accept.
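As an illustration, a LiteLLM proxy config in LiteLLM's standard model_list format could look like the sketch below. This is an assumption about the generated file's shape, not its exact contents; the route and backend names come from the example deployment file above:

```yaml
model_list:
  - model_name: light
    litellm_params:
      model: ollama/your-light-model-name
      api_base: https://client-light-ollama-service-url
  - model_name: expert
    litellm_params:
      model: openai/your-expert-model-name
      api_base: https://client-expert-vllm-service-url/v1
```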
Runtime Route Discovery
After deployment, applications can discover available routes:
    from smxadk import SMXAgentClient

    client = SMXAgentClient(
        base_url="https://client-owned-agent-service-url",
    )

    print(client.supported_modes())

Example output:

    ["light", "medium", "expert"]
If a caller requests a mode that is not supported, the Agent Service rejects it cleanly before calling LiteLLM.
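Applications can use the discovered routes to validate or fall back before sending a request. A minimal sketch, where pick_mode is a hypothetical helper (not part of the SDK):

```python
def pick_mode(requested: str, supported: list[str], fallback: str = "light") -> str:
    """Return the requested route if the deployment supports it,
    otherwise fall back to an available cheaper route."""
    if requested in supported:
        return requested
    if fallback in supported:
        return fallback
    raise ValueError(f"No usable mode among {supported!r}")
```

For example, `mode = pick_mode("expert", client.supported_modes())` would downgrade to "light" on a deployment that never provisioned an expert backend.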
Separation of Responsibilities
smx_deployment.yaml → client backend routes and model catalogue
LiteLLM Proxy → backend URL routing
Agent Service → request validation and orchestration
smxADK → client SDK for application developers
The ADK only talks to the Agent Service.
It does not need to know the individual model backend URLs.
Health Check
    from smxadk import SMXAgentClient

    client = SMXAgentClient(
        base_url="https://your-agent-service-url",
    )

    health = client.health()
    print(health.status)
    print(health.supported_modes)
Streaming Chat
    from smxadk import SMXAgentClient

    client = SMXAgentClient(
        base_url="https://your-agent-service-url",
    )

    for chunk in client.stream_chat(
        message="Explain RAG in two short sentences.",
        mode="expert",
    ):
        print(chunk, end="", flush=True)
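When the full answer is needed after streaming, the chunks can be captured while they are displayed. A small sketch assuming, as in the example above, that stream_chat yields text chunks (collect_stream is a hypothetical helper, not part of the SDK):

```python
def collect_stream(chunks, sink=None) -> str:
    """Consume an iterable of text chunks, optionally forwarding each
    chunk to a sink callable (e.g. for live display), and return the
    assembled answer."""
    parts = []
    for chunk in chunks:
        if sink is not None:
            sink(chunk)
        parts.append(chunk)
    return "".join(parts)
```

Usage might look like `answer = collect_stream(client.stream_chat(...), sink=lambda c: print(c, end="", flush=True))`.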
Supported Agent Service Endpoints
smxADK currently supports:
GET /health
POST /chat
POST /chat/stream
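The SDK methods plausibly map onto these endpoints as sketched below; this mapping is inferred from the method names and endpoint list, not confirmed internals of the client:

```python
# Hypothetical mapping from SDK methods to Agent Service endpoints.
ENDPOINTS = {
    "health": ("GET", "/health"),
    "chat": ("POST", "/chat"),
    "stream_chat": ("POST", "/chat/stream"),
}

def endpoint_url(base_url: str, name: str) -> tuple[str, str]:
    """Return the (HTTP method, full URL) pair for a named SDK call."""
    method, path = ENDPOINTS[name]
    return method, base_url.rstrip("/") + path
```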
Explicit Model Routing
The caller must explicitly choose the model route.
Example:
    response = client.chat(
        message="Write a short summary.",
        mode="light",
    )
Available modes depend on the deployed Agent Service.
Current example modes:

- light
- medium
- heavy
- expert
- expert-heavy
Response Shape
client.chat() returns a ChatResponse object:

    response.answer
    response.mode
    response.usage.prompt_tokens
    response.usage.completion_tokens
    response.usage.total_tokens
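The shape can be pictured as the following dataclasses; this is an illustrative sketch of the documented fields, not the SDK's actual class definitions:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    prompt_tokens: int      # tokens sent to the model
    completion_tokens: int  # tokens generated by the model
    total_tokens: int       # prompt + completion

@dataclass
class ChatResponse:
    answer: str   # the model's reply text
    mode: str     # the route that served the request
    usage: Usage  # token accounting for the call
```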
Project Structure
smx-adk/
├── pyproject.toml
├── README.md
└── smxadk/
├── __init__.py
├── client.py
└── schemas.py
Design Goal
smxADK is designed to make deployed SyntaxMatrix Agent Services easy to plug into:
- FastAPI apps
- Dash apps
- Flask apps
- notebooks
- internal tools
- enterprise AI assistants
- SyntaxMatrix-based applications
Licence
Proprietary / SyntaxMatrix.
File details

smxadk-0.1.1.tar.gz (source distribution)

- Size: 8.2 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

| Algorithm | Hash digest |
|---|---|
| SHA256 | 6548b996512ad5bbf2854c2f40ba2892054fb1bdb76d3de0caf290de79eef47f |
| MD5 | 4f8b634f7c0485d8ee6ad74b0b6adab5 |
| BLAKE2b-256 | a661dec02ae2e027af8c7bba453bdf743839e29858d579e7a39c9bb61c21a613 |
File details

smxadk-0.1.1-py3-none-any.whl (built distribution, Python 3)

- Size: 8.0 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

| Algorithm | Hash digest |
|---|---|
| SHA256 | b52384a0ef2876a9828eb681432bbd798d570e5e872917672465208d84d86b25 |
| MD5 | 1dc395b68e8e171087141ffcbd79c786 |
| BLAKE2b-256 | 71645b38d73b8cd90b7785101989dcd8937449d5ea3378ff75aea829a9e22721 |