Semantic Kernel Agent Factory
A comprehensive SDK for creating and managing AI agents powered by Microsoft Semantic Kernel with MCP (Model Context Protocol) server integration. Build sophisticated conversational agents with tool integration, deploy them as web services, or interact with them through a rich terminal interface.
Features
- 🤖 Agent Factory: Create and manage multiple Semantic Kernel-based agents with different configurations
- 🔗 MCP Integration: Connect agents to external tools via Model Context Protocol (stdio and SSE servers)
- 🖥️ Interactive Console: Rich terminal-based chat interface with multi-agent support powered by Textual
- 🌐 Web Service Factory: Deploy agents as HTTP/REST APIs with A2A (Agent-to-Agent) protocol support
- ⚡ Streaming Support: Real-time response streaming for both console and web interfaces
- 📊 Health Monitoring: Built-in MCP server health checks and status monitoring
- 🔧 Flexible Configuration: YAML-based configuration with environment variable support
- 📈 Observability: OpenTelemetry integration for tracing and monitoring
Installation
Basic Installation
# Install core functionality only
pip install semantic-kernel-agent-factory
Installation with Optional Features
# For development (includes testing, linting, and type checking tools)
pip install semantic-kernel-agent-factory[dev]
# For web service deployment
pip install semantic-kernel-agent-factory[web]
# For enhanced monitoring and observability
pip install semantic-kernel-agent-factory[monitoring]
# For documentation generation
pip install semantic-kernel-agent-factory[docs]
# Install multiple feature sets
pip install semantic-kernel-agent-factory[dev,web,monitoring]
Development Installation
For local development:
# Clone the repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory
# Install in editable mode with development dependencies
pip install -e ".[dev]"
Quick Start
1. Console Application
Create a configuration file config.yaml:
agent_factory:
  agents:
    GeneralAssistant:
      name: "GeneralAssistant"
      instructions: |
        You are a helpful AI assistant.
        Answer questions clearly and concisely.
      model: "gpt-4"
      model_settings:
        temperature: 0.7
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
Run the interactive console:
agent-factory chat -c config.yaml
2. Python API - Agent Factory
import asyncio

from agent_factory import AgentFactory, AgentFactoryConfig, AgentConfig

async def main():
    # Create configuration
    config = AgentFactoryConfig(
        agents={
            "assistant": AgentConfig(
                name="assistant",
                instructions="You are a helpful AI assistant",
                model="gpt-4",
            )
        },
        openai_models={
            "gpt-4": {
                "model": "gpt-4",
                "api_key": "your-api-key",
                "endpoint": "your-endpoint",
            }
        },
    )

    # Create and use agents
    async with AgentFactory(config) as factory:
        agent = factory.get_agent("assistant")
        # Use the agent for conversations (see the sketch below)

asyncio.run(main())
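The placeholder comment above elides the actual conversation step. How you invoke the agent depends on what get_agent returns; a minimal sketch, assuming it is a Semantic Kernel ChatCompletionAgent (get_response is part of recent semantic-kernel releases; adapt to your installed version):

# Sketch: body for the `async with AgentFactory(config) as factory:` block.
# Assumption: get_agent returns a Semantic Kernel ChatCompletionAgent.
agent = factory.get_agent("assistant")
response = await agent.get_response(messages="Summarize what you can do.")
print(response.message.content)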
3. Web Service Deployment
Create a service configuration service_config.yaml:
agent_factory:
  agents:
    ChatBot:
      name: "ChatBot"
      instructions: "You are a helpful chatbot"
      model: "gpt-4"
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
service_factory:
  services:
    ChatBot:
      card:
        name: "ChatBot"
        description: "AI-powered chatbot service"
      enable_token_streaming: true
Deploy as web service:
import asyncio

import uvicorn

from agent_factory import AgentServiceFactory, AgentServiceFactoryConfig

async def main():
    config = AgentServiceFactoryConfig.from_file("service_config.yaml")
    # Keep the factory context open while serving so agent resources
    # (such as MCP connections) stay alive for the server's lifetime.
    async with AgentServiceFactory(config) as factory:
        app = await factory.create_application()
        server = uvicorn.Server(uvicorn.Config(app, host="0.0.0.0", port=8000))
        await server.serve()

if __name__ == "__main__":
    asyncio.run(main())
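The exact HTTP routes the service factory registers aren't documented here; since the [web] extra builds on FastAPI, one way to discover them is to read the auto-generated OpenAPI schema. A hypothetical smoke test using httpx (URL and port match the uvicorn settings above):

import httpx

# Hypothetical smoke test: list the routes the running service exposes
# by fetching FastAPI's auto-generated OpenAPI schema.
resp = httpx.get("http://localhost:8000/openapi.json")
resp.raise_for_status()
print(sorted(resp.json()["paths"]))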
MCP Server Integration
Connect agents to external tools using Model Context Protocol servers:
agent_factory:
  agents:
    ToolAgent:
      name: "ToolAgent"
      instructions: "You have access to various tools"
      model: "gpt-4"
      mcp_servers: ["time", "kubernetes"]
  mcp_servers:
    time:
      type: "stdio"
      command: "python"
      args: ["-m", "mcp_server_time"]
    kubernetes:
      type: "sse"
      url: "https://k8s-mcp-server.example.com/sse"
      timeout: 10
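If you want to write your own stdio server for agents to call, the official MCP Python SDK makes this a few lines. A minimal sketch, assuming `pip install mcp`; the server name and tool are illustrative:

from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("time")

@mcp.tool()
def current_time() -> str:
    """Return the current UTC time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

Point a stdio entry's command/args at a script like this and list its name under the agent's mcp_servers.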
Console Features
The interactive console provides:
- Multi-Agent Chat: Switch between different agents in tabbed interface
- Real-time Streaming: See responses as they're generated
- MCP Status Monitoring: Live health checks of connected MCP servers
- Function Call Visibility: See tool calls and results in real-time
- Keyboard shortcuts:
  - Ctrl+Enter: Send message
  - Ctrl+L: Clear chat
  - F1: Toggle agent panel
  - F2: Toggle logs
  - Ctrl+W: Close tab
Configuration
Agent Configuration
agent_factory:
  agents:
    MyAgent:
      name: "MyAgent"
      instructions: "System prompt for the agent"
      model: "gpt-4"
      model_settings:
        temperature: 0.7
        max_tokens: 2000
      response_json_schema:  # Optional structured output
        type: "object"
        properties:
          answer:
            type: "string"
      mcp_servers: ["tool1", "tool2"]
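The same agent can also be declared programmatically; a sketch assuming AgentConfig accepts keyword arguments mirroring the YAML keys above (model_settings, response_json_schema, and mcp_servers as keywords are assumptions based on that mapping):

from agent_factory import AgentConfig

# Assumption: keyword names mirror the YAML keys shown above.
my_agent = AgentConfig(
    name="MyAgent",
    instructions="System prompt for the agent",
    model="gpt-4",
    model_settings={"temperature": 0.7, "max_tokens": 2000},
    response_json_schema={
        "type": "object",
        "properties": {"answer": {"type": "string"}},
    },
    mcp_servers=["tool1", "tool2"],
)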
OpenAI Models
agent_factory:
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
    gpt-3.5-turbo:
      model: "gpt-3.5-turbo"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
MCP Server Types
Stdio servers (local processes):
mcp_servers:
  local_tool:
    type: "stdio"
    command: "python"
    args: ["-m", "my_mcp_server"]
    env:
      DEBUG: "true"
SSE servers (HTTP-based):
mcp_servers:
  remote_tool:
    type: "sse"
    url: "https://api.example.com/mcp"
    timeout: 15
CLI Commands
# Start interactive chat
agent-factory chat -c config.yaml
# List configured agents
agent-factory list -c config.yaml
# Enable verbose logging
agent-factory chat -c config.yaml --verbose
# Custom log directory
agent-factory chat -c config.yaml --log-dir /path/to/logs
Environment Variables
Configure using environment variables:
# OpenAI Configuration
export OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
# Agent Factory Settings
export AGENT_FACTORY__MODEL_SELECTION="cost" # first, cost, latency, quality
export AGENT_FACTORY__MCP_FAILURE_STRATEGY="lenient" # strict, lenient
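The AGENT_FACTORY__ double-underscore names follow the pydantic-settings convention for prefixed fields. A purely illustrative sketch of how such variables map onto settings attributes (not the library's actual classes):

from pydantic_settings import BaseSettings, SettingsConfigDict

# Illustrative only: AGENT_FACTORY__MODEL_SELECTION -> model_selection, etc.
class FactorySettings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="AGENT_FACTORY__")

    model_selection: str = "first"        # first, cost, latency, quality
    mcp_failure_strategy: str = "strict"  # strict, lenient

print(FactorySettings().model_selection)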
Examples
See the examples/ directory for:
- cli_example.yaml - Console application setup
- agent_service_factory_config.yaml - Web service configuration
- web_service.py - Web service deployment example
Development
Quick Setup
# Clone repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory
# Install in editable mode with all development dependencies
pip install -e ".[dev]"
Available Development Tools
The [dev] extra includes:
- Testing: pytest, pytest-asyncio, pytest-cov, pytest-mock
- Code Formatting: black, isort, ruff
- Type Checking: mypy with type stubs
- Coverage: pytest-cov for test coverage reports
Development Commands
# Run tests
pytest
# Run tests with coverage
pytest --cov=agent_factory --cov-report=html
# Format code
black .
isort .
# Lint code
ruff check .
# Type checking
mypy agent_factory
# Run all quality checks
make test-cov # Runs tests with coverage
make format # Formats code
make type-check # Type checking
Optional Development Features
# Install with all development tools
pip install -e ".[dev]"
# For web service development
pip install -e ".[dev,web]"
# Install monitoring tools for development
pip install -e ".[dev,monitoring]"
# Install documentation tools
pip install -e ".[dev,docs]"
Architecture
The Semantic Kernel Agent Factory consists of several key components:
- AgentFactory: Core factory for creating and managing Semantic Kernel agents
- AgentServiceFactory: Web service wrapper that exposes agents as HTTP APIs (requires the [web] extra)
- MCPProvider: Manages connections to Model Context Protocol servers
- Console Application: Terminal-based interface for interactive agent chat
- Configuration System: YAML-based configuration with validation
- Observability: OpenTelemetry integration for monitoring (enhanced with the [monitoring] extra)
Optional Components
Different installation options enable additional features:
- [web]: FastAPI-based web services, uvicorn server, HTTP clients
- [monitoring]: Enhanced observability with Prometheus metrics and OpenTelemetry instrumentation
- [docs]: Sphinx-based documentation generation
- [dev]: Development tools for testing, linting, and type checking
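Because the web stack is optional, code that should degrade gracefully can probe for it at import time. A small sketch (fastapi is named above as part of the [web] extra):

# Detect whether the optional [web] dependencies are installed.
try:
    import fastapi  # provided by the [web] extra
    HAS_WEB = True
except ImportError:
    HAS_WEB = False

print(f"Web service support available: {HAS_WEB}")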
Requirements
- Python 3.10+
- Microsoft Semantic Kernel
- Azure OpenAI or OpenAI API access
- Optional: MCP-compatible tool servers
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Install development dependencies: pip install -e ".[dev]"
- Add tests for new functionality
- Run the test suite and ensure all checks pass:
  # Run tests
  pytest
  # Format code
  black .
  isort .
  # Lint code
  ruff check .
  # Type checking
  mypy agent_factory
- Submit a pull request
Development Environment Setup
# Install with all development tools
pip install -e ".[dev]"
# For web service development
pip install -e ".[dev,web]"
# For full development environment
pip install -e ".[dev,web,monitoring,docs]"
Project Structure
- agent_factory/ - Core library code
- tests/ - Test suite
- examples/ - Usage examples
- docs/ - Documentation (when using the [docs] extra)
Code Quality Standards
This project uses:
- Black for code formatting
- isort for import sorting
- Ruff for linting
- mypy for type checking
- pytest for testing with >80% coverage requirement
License
This project is licensed under the MIT License - see the LICENSE file for details.
Related Projects
- Microsoft Semantic Kernel
- Model Context Protocol
- Textual - Powers the console interface