
Agent Control Models

Shared data models for the Agent Control server and SDK. This package contains all the Pydantic models used for API requests, responses, and data validation.

Why Shared Models?

Having a separate models package provides several benefits:

  1. Single Source of Truth: Models are defined once and used everywhere
  2. Type Safety: Ensures server and SDK use identical data structures
  3. Versioning: Models can be versioned independently
  4. Easier Maintenance: Changes propagate automatically to both server and SDK
  5. Clear Contract: API contract is explicitly defined

Common Patterns in Popular Python Packages

This design follows patterns used by popular packages:

1. Shared Models (Our Approach)

  • Google APIs (google-api-core): Separate proto/model definitions
  • Stripe (stripe-python): Models package shared across components
  • PySpark: Shared types and schemas

2. JSON/Pydantic Hybrid

  • FastAPI: Pydantic models with JSON serialization
  • Anthropic SDK: Pydantic models with .to_dict() and .from_dict()
  • OpenAI SDK: Typed models with JSON compatibility

Installation

This package is typically installed as a dependency:

# Server depends on it
cd server
uv add agent-control-models

# SDK depends on it
cd sdk
uv add agent-control-models

Usage

Agent Models

from agent_control_models import Agent, Step

# Create an agent
agent = Agent(
    agent_name="Customer Support Bot",
    agent_id="support-bot-v1",
    agent_description="Handles customer inquiries",
    agent_version="1.0.0"
)

# Create a step
step = Step(
    type="llm_inference",
    name="chat",
    input="Hello, how can I help?",
    output="I'm here to assist you!"
)

Control Models

from agent_control_models import ControlDefinition, ControlScope, ControlAction

# Define a control
control = ControlDefinition(
    name="block-toxic-input",
    description="Block toxic user messages",
    enabled=True,
    execution="server",
    scope=ControlScope(
        step_types=["llm_inference"],
        stages=["pre"]
    ),
    action=ControlAction(decision="deny")
)

Evaluation Models

from agent_control_models import EvaluationRequest, EvaluationResponse

# Create evaluation request
request = EvaluationRequest(
    agent_uuid="agent-uuid-here",
    step=Step(
        type="llm_inference",
        name="chat",
        input="User message"
    ),
    stage="pre"
)

# Evaluation response
response = EvaluationResponse(
    allowed=True,
    violated_controls=[]
)

Models

Core Models

BaseModel

Base class for all models with common utilities (Pydantic v2):

  • model_dump(): Convert to a Python dictionary
  • model_dump_json(): Convert to a JSON string
  • model_validate(): Create an instance from a dictionary

Configuration:

  • Accepts both snake_case and camelCase fields
  • Validates on assignment
  • JSON-compatible serialization
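A base class with this configuration could be sketched in plain Pydantic v2 as follows. This is an illustrative guess at the wiring, not the package's actual source; the `Example` model is hypothetical.

```python
# Sketch of a base class that accepts both snake_case and camelCase
# and validates on assignment (Pydantic v2).
from pydantic import BaseModel as PydanticBaseModel, ConfigDict
from pydantic.alias_generators import to_camel

class BaseModel(PydanticBaseModel):
    model_config = ConfigDict(
        alias_generator=to_camel,   # expose camelCase aliases for every field
        populate_by_name=True,      # still accept the snake_case field names
        validate_assignment=True,   # re-validate when attributes are assigned
    )

class Example(BaseModel):
    agent_name: str

# Both spellings validate to the same model
a = Example(agent_name="Bot")
b = Example.model_validate({"agentName": "Bot"})
assert a == b
```

With `by_alias=True`, serialization emits camelCase keys, so JSON produced by the server matches what a JavaScript client expects.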

Agent

Agent metadata and configuration.

Fields:

  • agent_name (str): Human-readable agent name
  • agent_id (str): Unique identifier
  • agent_description (Optional[str]): Agent description
  • agent_version (Optional[str]): Agent version
  • tools (Optional[List[str]]): List of available tools
  • metadata (Optional[Dict]): Additional metadata

Step

Represents a single step in agent execution.

Fields:

  • type (str): Step type (e.g., "llm_inference", "tool")
  • name (str): Step name
  • input (Optional[Any]): Step input data
  • output (Optional[Any]): Step output data
  • context (Optional[Dict]): Additional context

ControlDefinition

Complete control specification.

Fields:

  • name (str): Control name
  • description (Optional[str]): Control description
  • enabled (bool): Whether control is active
  • execution (str): Execution mode ("server" or "local")
  • scope (ControlScope): When to apply the control
  • selector (ControlSelector): What data to evaluate
  • evaluator (EvaluatorConfig): How to evaluate
  • action (ControlAction): What to do on match

EvaluationRequest

Request for evaluating controls.

Fields:

  • agent_uuid (str): Agent identifier
  • step (Step): Step to evaluate
  • stage (str): Evaluation stage ("pre" or "post")

EvaluationResponse

Response from control evaluation.

Fields:

  • allowed (bool): Whether the step is allowed
  • violated_controls (List[str]): Names of violated controls
  • evaluation_results (Optional[List]): Detailed evaluation results
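A caller typically branches on allowed and reports which controls fired. The sketch below uses a plain Pydantic stand-in for EvaluationResponse so it is self-contained; the real model lives in agent_control_models.

```python
# Handling an evaluation response; EvaluationResponse here is a plain
# Pydantic stand-in mirroring the documented fields.
from typing import Any, List, Optional
from pydantic import BaseModel

class EvaluationResponse(BaseModel):
    allowed: bool
    violated_controls: List[str] = []
    evaluation_results: Optional[List[Any]] = None

response = EvaluationResponse(
    allowed=False,
    violated_controls=["block-toxic-input"],
)

if not response.allowed:
    # Deny the step and surface which controls fired
    blocked_by = ", ".join(response.violated_controls)
```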

HealthResponse

Health check response.

Fields:

  • status (str): Health status ("healthy")
  • version (str): Server version

Design Patterns

1. Pydantic v2

All models use Pydantic v2 for validation and serialization:

from agent_control_models import Agent

# Create with validation
agent = Agent(
    agent_name="My Agent",
    agent_id="my-agent-v1"
)

# Serialize to dict
agent_dict = agent.model_dump()

# Serialize to JSON
agent_json = agent.model_dump_json()

# Deserialize from dict
agent_copy = Agent.model_validate(agent_dict)

2. Type Safety

Models provide strong typing throughout the stack:

from agent_control_models import Step, EvaluationRequest

# Type-safe step creation
step = Step(
    type="llm_inference",
    name="chat",
    input="Hello"
)

# Type-safe evaluation request
request = EvaluationRequest(
    agent_uuid="uuid-here",
    step=step,
    stage="pre"
)

3. Extensibility

Models support additional metadata for extensibility:

from agent_control_models import Agent

# Add custom metadata
agent = Agent(
    agent_name="Support Bot",
    agent_id="support-v1",
    metadata={
        "team": "customer-success",
        "environment": "production",
        "custom_field": "value"
    }
)

Development

Adding New Models

  1. Create a new file in src/agent_control_models/
  2. Define models extending BaseModel
  3. Export in __init__.py
  4. Update both server and SDK to use the new models

Example:

# src/agent_control_models/auth.py
from .base import BaseModel

class AuthRequest(BaseModel):
    api_key: str
    
# src/agent_control_models/__init__.py
from .auth import AuthRequest

__all__ = [..., "AuthRequest"]

Testing

cd models
uv run pytest
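A typical test for these models is a serialization round-trip. The sketch below shows the shape such a test might take; it uses a plain Pydantic stand-in for Agent so it runs without the package installed, and the file path is illustrative.

```python
# Hypothetical models/tests/test_roundtrip.py
from pydantic import BaseModel

class Agent(BaseModel):  # stand-in for agent_control_models.Agent
    agent_name: str
    agent_id: str

def test_roundtrip():
    agent = Agent(agent_name="My Agent", agent_id="my-agent-v1")
    # dict round-trip
    assert Agent.model_validate(agent.model_dump()) == agent
    # JSON round-trip
    assert Agent.model_validate_json(agent.model_dump_json()) == agent
```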

Best Practices

  1. Always extend BaseModel: Get free JSON/dict conversion
  2. Use Field for validation: Add constraints and descriptions
  3. Keep models simple: No business logic, just data
  4. Version carefully: Model changes affect both server and SDK
  5. Document fields: Use Field's description parameter
  6. Use Optional appropriately: Mark optional fields clearly
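Practices 2, 5, and 6 can be combined in a single field definition. The sketch below uses plain Pydantic v2; the constraints and model name are illustrative, not the package's actual schema.

```python
# Field constraints, descriptions, and explicit Optional fields.
from typing import Optional
from pydantic import BaseModel, Field

class AgentSketch(BaseModel):
    agent_name: str = Field(min_length=1, description="Human-readable agent name")
    agent_id: str = Field(pattern=r"^[a-z0-9-]+$", description="Unique identifier")
    agent_version: Optional[str] = Field(default=None, description="Semantic version")

agent = AgentSketch(agent_name="Support Bot", agent_id="support-v1")
assert agent.agent_version is None  # optional field defaults cleanly
```

Invalid data (an empty name, an id with uppercase letters) now fails at construction time with a descriptive ValidationError instead of surfacing later in the server or SDK.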
