
Agent Control Models

Shared data models for Agent Control server and SDK. This package contains all the Pydantic models used for API requests, responses, and data validation.

Why Shared Models?

Having a separate models package provides several benefits:

  1. Single Source of Truth: Models are defined once and used everywhere
  2. Type Safety: Ensures server and SDK use identical data structures
  3. Versioning: Models can be versioned independently
  4. Easier Maintenance: Changes propagate automatically to both server and SDK
  5. Clear Contract: API contract is explicitly defined

Common Patterns in Popular Python Packages

This design follows patterns used by popular packages:

1. Shared Models (Our Approach)

  • Google APIs (google-api-core): Separate proto/model definitions
  • Stripe (stripe-python): Models package shared across components
  • PySpark: Shared types and schemas

2. JSON/Pydantic Hybrid

  • FastAPI: Pydantic models with JSON serialization
  • Anthropic SDK: Pydantic models with .to_dict() and .from_dict()
  • OpenAI SDK: Typed models with JSON compatibility

Installation

This package is typically installed as a dependency:

# Server depends on it
cd server
uv add agent-control-models

# SDK depends on it
cd sdk
uv add agent-control-models

Usage

Agent Models

from agent_control_models import Agent, Step

# Create an agent
agent = Agent(
    agent_name="Customer Support Bot",
    agent_id="550e8400-e29b-41d4-a716-446655440000",
    agent_description="Handles customer inquiries",
    agent_version="1.0.0"
)

# Create a step
step = Step(
    type="llm_inference",
    name="chat",
    input="Hello, how can I help?",
    output="I'm here to assist you!"
)

Control Models

from agent_control_models import ControlDefinition, ControlScope, ControlAction

# Define a control
control = ControlDefinition(
    name="block-toxic-input",
    description="Block toxic user messages",
    enabled=True,
    execution="server",
    scope=ControlScope(
        step_types=["llm_inference"],
        stages=["pre"]
    ),
    action=ControlAction(decision="deny")
)

Evaluation Models

from agent_control_models import EvaluationRequest, EvaluationResponse

# Create evaluation request
request = EvaluationRequest(
    agent_name="agent-uuid-here",
    step=Step(
        type="llm_inference",
        name="chat",
        input="User message"
    ),
    stage="pre"
)

# Evaluation response
response = EvaluationResponse(
    allowed=True,
    violated_controls=[]
)

Models

Core Models

BaseModel

Base class for all models with common utilities:

  • model_dump(): Convert to Python dictionary (Pydantic v2)
  • model_dump_json(): Convert to JSON string (Pydantic v2)
  • model_validate(): Create from dictionary (Pydantic v2)

Configuration:

  • Accepts both snake_case and camelCase fields
  • Validates on assignment
  • JSON-compatible serialization
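The round-trip these utilities provide, together with the snake_case/camelCase acceptance, can be illustrated with a minimal stdlib-only sketch. This is not the package's actual Pydantic implementation; the `SketchBaseModel` class and its subclasses below are invented stand-ins:

```python
import json
import re


def camel_to_snake(name: str) -> str:
    """Convert a camelCase key to snake_case (e.g. agentName -> agent_name)."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


class SketchBaseModel:
    """Toy stand-in for the package's BaseModel, showing the same round-trip API."""

    fields: tuple = ()

    def __init__(self, **kwargs):
        for field in self.fields:
            setattr(self, field, kwargs.get(field))

    def model_dump(self) -> dict:
        """Convert to a plain Python dictionary."""
        return {f: getattr(self, f) for f in self.fields}

    def model_dump_json(self) -> str:
        """Convert to a JSON string."""
        return json.dumps(self.model_dump())

    @classmethod
    def model_validate(cls, data: dict):
        """Create an instance from a dict, accepting snake_case or camelCase keys."""
        normalized = {camel_to_snake(k): v for k, v in data.items()}
        return cls(**{f: normalized.get(f) for f in cls.fields})


class SketchAgent(SketchBaseModel):
    fields = ("agent_name", "agent_id")


# camelCase keys are normalized to the snake_case field names
agent = SketchAgent.model_validate({"agentName": "Bot", "agentId": "123"})

# dict -> JSON -> dict survives a full round-trip
copy = SketchAgent.model_validate(json.loads(agent.model_dump_json()))
```

The real models get this behavior from Pydantic v2 configuration rather than hand-written code, but the observable contract is the same: what `model_dump()` emits, `model_validate()` accepts.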

Agent

Agent metadata and configuration.

Fields:

  • agent_name (str): Human-readable agent name
  • agent_id (UUID): Unique identifier
  • agent_description (Optional[str]): Agent description
  • agent_version (Optional[str]): Agent version
  • tools (Optional[List[str]]): List of available tools
  • metadata (Optional[Dict]): Additional metadata

Step

Represents a single step in agent execution.

Fields:

  • type (str): Step type (e.g., "llm_inference", "tool")
  • name (str): Step name
  • input (Optional[Any]): Step input data
  • output (Optional[Any]): Step output data
  • context (Optional[Dict]): Additional context

ControlDefinition

Complete control specification.

Fields:

  • name (str): Control name
  • description (Optional[str]): Control description
  • enabled (bool): Whether control is active
  • execution (str): Execution mode ("server" or "local")
  • scope (ControlScope): When to apply the control
  • selector (ControlSelector): What data to evaluate
  • evaluator (EvaluatorSpec): How to evaluate
  • action (ControlAction): What to do on match
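The execution field determines where a control runs. A hypothetical stdlib-only sketch of how a caller might dispatch on it, using plain dicts in place of the models (the dispatcher and both evaluator functions are illustrative names, not part of this package):

```python
# Hypothetical dispatch on a control's execution mode ("server" or "local").
# All function names below are invented for illustration.

def evaluate_locally(control: dict, step: dict) -> dict:
    """Toy local evaluator: deny when the control's name appears in the step input."""
    matched = control["name"] in str(step.get("input", ""))
    return {
        "allowed": not matched,
        "violated_controls": [control["name"]] if matched else [],
    }


def send_to_server(control: dict, step: dict) -> dict:
    """Stand-in for an HTTP call to the Agent Control server."""
    return {"allowed": True, "violated_controls": []}  # pretend the server allowed it


def dispatch(control: dict, step: dict) -> dict:
    """Skip disabled controls, then route to server or local evaluation."""
    if not control.get("enabled", False):
        return {"allowed": True, "violated_controls": []}
    if control["execution"] == "server":
        return send_to_server(control, step)
    return evaluate_locally(control, step)


control = {"name": "block-toxic-input", "enabled": True, "execution": "local"}
result = dispatch(control, {"type": "llm_inference", "input": "block-toxic-input here"})
```

The point of the sketch is the branching contract: a disabled control is a no-op, and `execution` selects which side of the deployment does the work.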

EvaluationRequest

Request for evaluating controls.

Fields:

  • agent_name (str): Agent identifier
  • step (Step): Step to evaluate
  • stage (str): Evaluation stage ("pre" or "post")

EvaluationResponse

Response from control evaluation.

Fields:

  • allowed (bool): Whether the step is allowed
  • violated_controls (List[str]): Names of violated controls
  • evaluation_results (Optional[List]): Detailed evaluation results
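Together these two models describe a simple contract: send a step plus a stage, get back an allow/deny decision. A stdlib sketch of that handshake, using dicts shaped like the models (the deny-list and the `block-sql-injection` control name are invented for illustration):

```python
# Sketch of the evaluation contract using plain dicts in place of the
# Pydantic models. The blocked-phrase check is invented for illustration.

BLOCKED_PHRASES = {"drop table"}  # hypothetical deny-list


def evaluate(request: dict) -> dict:
    """Return an EvaluationResponse-shaped dict for an EvaluationRequest-shaped dict."""
    text = str(request["step"].get("input", "")).lower()
    violated = ["block-sql-injection"] if any(p in text for p in BLOCKED_PHRASES) else []
    return {"allowed": not violated, "violated_controls": violated}


request = {
    "agent_name": "agent-uuid-here",
    "step": {"type": "llm_inference", "name": "chat", "input": "please DROP TABLE users"},
    "stage": "pre",
}
response = evaluate(request)
```

In the real system the request and response travel as the typed models above, so both sides validate the same shapes instead of trading raw dicts.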

HealthResponse

Health check response.

Fields:

  • status (str): Health status ("healthy")
  • version (str): Server version

Design Patterns

1. Pydantic v2

All models use Pydantic v2 for validation and serialization:

from agent_control_models import Agent

# Create with validation
agent = Agent(
    agent_name="My Agent",
    agent_id="550e8400-e29b-41d4-a716-446655440000"
)

# Serialize to dict
agent_dict = agent.model_dump()

# Serialize to JSON
agent_json = agent.model_dump_json()

# Deserialize from dict
agent_copy = Agent.model_validate(agent_dict)

2. Type Safety

Models provide strong typing throughout the stack:

from agent_control_models import Step, EvaluationRequest

# Type-safe step creation
step = Step(
    type="llm_inference",
    name="chat",
    input="Hello"
)

# Type-safe evaluation request
request = EvaluationRequest(
    agent_name="uuid-here",
    step=step,
    stage="pre"
)

3. Extensibility

Models support additional metadata for extensibility:

from agent_control_models import Agent

# Add custom metadata
agent = Agent(
    agent_name="Support Bot",
    agent_id="550e8400-e29b-41d4-a716-446655440000",
    metadata={
        "team": "customer-success",
        "environment": "production",
        "custom_field": "value"
    }
)

Development

Adding New Models

  1. Create a new file in src/agent_control_models/
  2. Define models extending BaseModel
  3. Export in __init__.py
  4. Update both server and SDK to use the new models

Example:

# src/agent_control_models/auth.py
from .base import BaseModel

class AuthRequest(BaseModel):
    api_key: str
    
# src/agent_control_models/__init__.py
from .auth import AuthRequest

__all__ = [..., "AuthRequest"]

Testing

cd models
uv run pytest

Best Practices

  1. Always extend BaseModel: Get free JSON/dict conversion
  2. Use Field for validation: Add constraints and descriptions
  3. Keep models simple: No business logic, just data
  4. Version carefully: Model changes affect both server and SDK
  5. Document fields: Use Field's description parameter
  6. Use Optional appropriately: Mark optional fields clearly
