
SyntaxMatrix Agent Development Kit for calling deployed SyntaxMatrix Agent Services.

Project description

smxADK

smxADK stands for SyntaxMatrix Agent Development Kit.

It is a lightweight Python SDK for calling a deployed SyntaxMatrix Agent Service from any Python project.


Installation

Install from PyPI:

pip install smxadk

For local development:

pip install -e .

Basic Usage

from smxadk import SMXAgentClient

client = SMXAgentClient(
    base_url="https://your-agent-service-url"
)

response = client.chat(
    message="Explain RAG in two short sentences.",
    mode="expert",
)

print(response.answer)
print(response.usage.total_tokens)


Self-Hosting Model

smxADK is designed for self-hosted SyntaxMatrix deployments.

The client organisation owns and operates its own backend infrastructure:

Client Application
      ↓
smxADK
      ↓
Client-owned Agent Service
      ↓
Client-owned LiteLLM Proxy
      ↓
Client-owned Ollama / vLLM backends

This means:

  • the client owns the backend URLs;
  • the client pays for its own GPUs, CPUs, storage, and networking;
  • the client controls its own data boundary;
  • SyntaxMatrix provides the SDK, deployment tooling, templates, and framework.

smxADK should not be hardcoded to use SyntaxMatrix-owned infrastructure.
The caller must provide the deployed Agent Service URL:

from smxadk import SMXAgentClient

client = SMXAgentClient(
    base_url="https://client-owned-agent-service-url"
)
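One common way to keep backend URLs out of application code is to resolve the Agent Service URL from the environment. This is a sketch, not part of the SDK; the variable name `SMX_AGENT_SERVICE_URL` is a hypothetical choice:

```python
import os

# SMX_AGENT_SERVICE_URL is a hypothetical variable name; use whatever your
# deployment tooling exports. The fallback is for local development only.
base_url = os.environ.get("SMX_AGENT_SERVICE_URL", "http://localhost:8000")

# The resolved URL is then passed to SMXAgentClient(base_url=base_url).
```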

Deployment Configuration

Client organisations define their available model routes in:

smx_deployment.yaml

Example:

agent_service:
  supported_modes:
    - light
    - medium
    - heavy
    - expert
    - expert-heavy

models:
  light:
    provider: ollama
    model: your-light-model-name
    api_base: https://client-light-ollama-service-url

  medium:
    provider: ollama
    model: your-medium-model-name
    api_base: https://client-medium-ollama-service-url

  heavy:
    provider: ollama
    model: your-heavy-model-name
    api_base: https://client-heavy-ollama-service-url

  expert:
    provider: openai_compatible
    model: your-expert-model-name
    api_base: https://client-expert-vllm-service-url/v1    

  expert-heavy:
    provider: openai_compatible
    model: your-expert-heavy-model-name
    api_base: https://client-expert-heavy-vllm-service-url/v1


A client does not need to deploy every possible route.
It declares only the routes it actually has.
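Before deployment, it can be useful to check that every declared mode has a matching model route. The helper below is an illustrative sketch (not part of smxADK) that operates on the parsed contents of smx_deployment.yaml, e.g. from yaml.safe_load; the field names follow the example above:

```python
def validate_deployment(config: dict) -> list[str]:
    """Return the declared modes that have no matching model route."""
    modes = config.get("agent_service", {}).get("supported_modes", [])
    models = config.get("models", {})
    return [mode for mode in modes if mode not in models]

# Example: three declared modes, routes defined for all three.
config = {
    "agent_service": {"supported_modes": ["light", "medium", "expert"]},
    "models": {"light": {}, "medium": {}, "expert": {}},
}
assert validate_deployment(config) == []
```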


Generating Deployment Files

Run:

smxadk generate

This generates:

../llm-proxy/config.yaml
../agent-service/supported_modes.txt

The generated LiteLLM config maps each route to the client-owned backend URL.

The generated supported_modes.txt tells the Agent Service which modes it should accept.
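As an illustration, the generated LiteLLM config might take the following shape. This is a sketch using LiteLLM's standard model_list format with the route names from the example above; the exact output of smxadk generate may differ:

```yaml
model_list:
  - model_name: light
    litellm_params:
      model: ollama/your-light-model-name
      api_base: https://client-light-ollama-service-url
  - model_name: expert
    litellm_params:
      model: openai/your-expert-model-name
      api_base: https://client-expert-vllm-service-url/v1
```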


Runtime Route Discovery

After deployment, applications can discover available routes:

from smxadk import SMXAgentClient

client = SMXAgentClient(
    base_url="https://client-owned-agent-service-url"
)

print(client.supported_modes())

Example output:

["light", "medium", "expert"]

If a caller requests a mode that is not supported, the Agent Service rejects it cleanly before calling LiteLLM.
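An application can mirror the same check client-side and fail before any request is sent. This guard is a sketch, not part of the SDK; at runtime, `supported` would come from client.supported_modes():

```python
def require_mode(requested: str, supported: list[str]) -> str:
    """Raise early if the requested mode is not deployed."""
    if requested not in supported:
        raise ValueError(
            f"Mode {requested!r} is not deployed; choose one of {supported}"
        )
    return requested
```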


Separation of Responsibilities

smx_deployment.yaml      → client backend routes and model catalogue
LiteLLM Proxy            → backend URL routing
Agent Service            → request validation and orchestration
smxADK                   → client SDK for application developers

The ADK only talks to the Agent Service.
It does not need to know the individual model backend URLs.


Health Check

from smxadk import SMXAgentClient

client = SMXAgentClient(
    base_url="https://your-agent-service-url"
)

health = client.health()

print(health.status)
print(health.supported_modes)
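A startup guard built on these fields might look like the following sketch. The "ok" status value is an assumption; the field names follow the attributes above:

```python
def check_deployment(status: str, supported_modes: list[str],
                     required_modes: list[str]) -> None:
    """Fail fast if the Agent Service is unhealthy or missing required modes."""
    if status != "ok":  # "ok" is an assumed healthy-status value
        raise RuntimeError(f"Agent Service unhealthy: {status}")
    missing = set(required_modes) - set(supported_modes)
    if missing:
        raise RuntimeError(f"Required modes not deployed: {sorted(missing)}")
```

At application startup this would be called as, e.g., check_deployment(health.status, health.supported_modes, ["light"]).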

Streaming Chat

from smxadk import SMXAgentClient

client = SMXAgentClient(
    base_url="https://your-agent-service-url"
)

for chunk in client.stream_chat(
    message="Explain RAG in two short sentences.",
    mode="expert",
):
    print(chunk, end="", flush=True)

Supported Agent Service Endpoints

smxADK currently supports:

GET  /health
POST /chat
POST /chat/stream

Explicit Model Routing

The caller must explicitly choose the model route.

Example:

response = client.chat(
    message="Write a short summary.",
    mode="light",
)

Available modes depend on the deployed Agent Service.

Current example modes:

light
medium
heavy
expert
expert-heavy
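Because the available modes vary per deployment, an application can pair supported_modes() with a preference order and degrade gracefully. This helper is illustrative, not part of smxADK:

```python
def pick_mode(supported,
              preferred=("expert-heavy", "expert", "heavy", "medium", "light")):
    """Return the most capable mode this deployment actually supports."""
    for mode in preferred:
        if mode in supported:
            return mode
    raise ValueError(f"No usable mode among {supported!r}")
```

For example, against a deployment exposing ["light", "medium", "expert"], pick_mode selects "expert".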

Response Shape

client.chat() returns a ChatResponse object:

response.answer
response.mode
response.usage.prompt_tokens
response.usage.completion_tokens
response.usage.total_tokens

Project Structure

smx-adk/
├── pyproject.toml
├── README.md
└── smxadk/
    ├── __init__.py
    ├── client.py
    └── schemas.py

Design Goal

smxADK is designed to make deployed SyntaxMatrix Agent Services easy to plug into:

  • FastAPI apps
  • Dash apps
  • Flask apps
  • notebooks
  • internal tools
  • enterprise AI assistants
  • SyntaxMatrix-based applications

Licence

Proprietary / SyntaxMatrix.
