# Azure AI Agent Server Core client library for Python

Foundation utilities and host framework for Azure AI Hosted Agents.
The `azure-ai-agentserver-core` package provides the foundation host framework for building Azure AI Hosted Agent containers. It handles the protocol-agnostic infrastructure — health probes, graceful shutdown, OpenTelemetry tracing, and ASGI serving — so that protocol packages can focus on their endpoint logic.
## Getting started

### Install the package

```bash
pip install azure-ai-agentserver-core
```
OpenTelemetry tracing with Azure Monitor and OTLP exporters is included by default.
### Prerequisites
- Python 3.10 or later
## Key concepts

### AgentServerHost

`AgentServerHost` is the host process for Azure AI Hosted Agent containers. It provides:
- **Health probe** — `GET /readiness` returns `200 OK` when the server is ready.
- **Graceful shutdown** — On `SIGTERM` the server drains in-flight requests (default 30 s timeout) before exiting.
- **OpenTelemetry tracing** — Automatic span creation with Azure Monitor and OTLP export when configured.
- **Hypercorn ASGI server** — Serves on `0.0.0.0:${PORT:-8088}` with HTTP/1.1.
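For local testing, the `/readiness` probe can be polled before sending traffic. A minimal sketch (the helper name, retry policy, and URL are illustrative, not part of the library):

```python
import time
import urllib.error
import urllib.request

def wait_ready(url="http://localhost:8088/readiness", attempts=30, delay=1.0):
    """Poll the readiness probe until it answers 200 OK, or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Server not up yet; retry after a short delay.
        time.sleep(delay)
    return False
```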
Protocol packages (e.g. `azure-ai-agentserver-invocations`) subclass `AgentServerHost` and add their endpoints in `__init__`.
### Environment variables

| Variable | Description | Default |
|---|---|---|
| `PORT` | Listen port | `8088` |
| `FOUNDRY_AGENT_NAME` | Agent name (used in tracing) | `""` |
| `FOUNDRY_AGENT_VERSION` | Agent version (used in tracing) | `""` |
| `FOUNDRY_PROJECT_ENDPOINT` | Azure AI Foundry project endpoint | `""` |
| `FOUNDRY_PROJECT_ARM_ID` | Foundry project ARM resource ID (used in tracing) | `""` |
| `FOUNDRY_AGENT_SESSION_ID` | Default session ID when not provided per-request | `""` |
| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Azure Monitor connection string | — |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | — |
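The defaulting described in the table can be sketched with a small resolver; the variable name and default come from the table above, while the resolution logic itself is an illustration rather than the library's internals:

```python
import os

def resolve_port(env):
    # PORT from the environment, falling back to the documented 8088 default.
    return int(env.get("PORT", "8088"))

port = resolve_port(os.environ)
```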
## Examples

`AgentServerHost` is typically used via a protocol package. The simplest setup with the invocations protocol:

```python
from azure.ai.agentserver.invocations import InvocationAgentServerHost
from starlette.responses import JSONResponse

app = InvocationAgentServerHost()

@app.invoke_handler
async def handle(request):
    body = await request.json()
    return JSONResponse({"greeting": f"Hello, {body['name']}!"})

app.run()
```
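The handler contract can be exercised without running the server by faking the request object. In this sketch, `FakeRequest` is a hypothetical stand-in for Starlette's `Request`, and the handler returns a plain dict instead of a `JSONResponse` so it runs without Starlette installed:

```python
import asyncio

class FakeRequest:
    """Hypothetical stand-in for starlette.requests.Request."""
    def __init__(self, payload):
        self._payload = payload

    async def json(self):
        return self._payload

async def handle(request):
    # Same shape as the invoke handler above, minus the JSONResponse wrapper.
    body = await request.json()
    return {"greeting": f"Hello, {body['name']}!"}

result = asyncio.run(handle(FakeRequest({"name": "Ada"})))
```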
### Subclassing AgentServerHost

For custom protocol implementations, subclass `AgentServerHost` and add routes:

```python
from azure.ai.agentserver.core import AgentServerHost
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

class MyAgentHost(AgentServerHost):
    def __init__(self, **kwargs):
        my_routes = [Route("/my-endpoint", self._handle, methods=["POST"])]
        existing = list(kwargs.pop("routes", None) or [])
        super().__init__(routes=existing + my_routes, **kwargs)

    async def _handle(self, request: Request):
        return JSONResponse({"status": "ok"})

app = MyAgentHost()
app.run()
```
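The route-merging step in `__init__` (preserve any caller-supplied routes, then append the subclass's own) can be isolated and checked on its own; here plain strings stand in for Starlette `Route` objects:

```python
def merge_routes(my_routes, **kwargs):
    # Mirrors the __init__ pattern above: pop any caller-supplied routes so
    # they are not passed twice, then append this subclass's routes after them.
    existing = list(kwargs.pop("routes", None) or [])
    return existing + my_routes, kwargs

merged, remaining = merge_routes(["/my-endpoint"], routes=["/caller"], debug=True)
```

Popping `routes` before calling `super().__init__` ensures the caller's routes survive while avoiding a duplicate-keyword error.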
### Shutdown handler

Register a cleanup function that runs during graceful shutdown:

```python
app = AgentServerHost()

@app.shutdown_handler
async def on_shutdown():
    # Close database connections, flush buffers, etc.
    pass
```
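The decorator-based registration above can be mimicked with a small stub to see when handlers fire; this is an illustrative sketch, not the actual `AgentServerHost` internals:

```python
import asyncio

class HostStub:
    """Illustrative stand-in for the shutdown_handler registration API."""

    def __init__(self):
        self._handlers = []

    def shutdown_handler(self, fn):
        # Register the coroutine and return it unchanged, decorator-style.
        self._handlers.append(fn)
        return fn

    async def _run_shutdown(self):
        # Invoke handlers in registration order once requests have drained.
        for fn in self._handlers:
            await fn()

host = HostStub()
events = []

@host.shutdown_handler
async def on_shutdown():
    events.append("cleanup")

asyncio.run(host._run_shutdown())
```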
### Configuring tracing

Tracing is enabled automatically when an Application Insights connection string is available:

```python
app = AgentServerHost(
    applicationinsights_connection_string="InstrumentationKey=...",
)
```

Or via environment variable:

```bash
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=..."
python my_agent.py
```
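Since the connection string can arrive either way, resolution order matters. A hedged sketch of the likely precedence (the assumption, not confirmed by the source, is that an explicit constructor argument wins over the environment variable):

```python
import os

def resolve_connection_string(explicit=None, env=None):
    """Illustrative resolver: explicit argument first, then the env var."""
    env = os.environ if env is None else env
    return explicit or env.get("APPLICATIONINSIGHTS_CONNECTION_STRING")
```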
## Troubleshooting

### Logging

Set the log level to `DEBUG` for detailed diagnostics:

```python
app = AgentServerHost(log_level="DEBUG")
```
### Reporting issues

To report an issue with the client library or request additional features, please open a GitHub issue.
## Next steps

- Install `azure-ai-agentserver-invocations` to add the invocation protocol endpoints.
- See the container image spec for the full hosted agent contract.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.