# AxonFlow Python SDK

Enterprise AI Governance in 3 Lines of Code.
## Installation

```bash
pip install axonflow
```

With LLM provider support:

```bash
pip install "axonflow[openai]"     # OpenAI integration
pip install "axonflow[anthropic]"  # Anthropic integration
pip install "axonflow[all]"        # All integrations
```
## Quick Start

### Async Usage (Recommended)

```python
import asyncio

from axonflow import AxonFlow


async def main():
    async with AxonFlow(
        endpoint="https://your-agent.axonflow.com",
        client_id="your-client-id",
        client_secret="your-client-secret",
    ) as client:
        # Execute a governed query
        response = await client.execute_query(
            user_token="user-jwt-token",
            query="What is AI governance?",
            request_type="chat",
        )
        print(response.data)


asyncio.run(main())
```
### Sync Usage

```python
from axonflow import AxonFlow

with AxonFlow.sync(
    endpoint="https://your-agent.axonflow.com",
    client_id="your-client-id",
    client_secret="your-client-secret",
) as client:
    response = client.execute_query(
        user_token="user-jwt-token",
        query="What is AI governance?",
        request_type="chat",
    )
    print(response.data)
```
## Features

### Gateway Mode

For the lowest-latency path, call your LLM provider directly while keeping full governance and audit compliance:
```python
from openai import AsyncOpenAI

from axonflow import AxonFlow, TokenUsage

openai = AsyncOpenAI()

async with AxonFlow(...) as client:
    # 1. Pre-check: get policy approval
    ctx = await client.get_policy_approved_context(
        user_token="user-jwt",
        query="Find patient records",
        data_sources=["postgres"],
    )
    if not ctx.approved:
        raise RuntimeError(f"Blocked: {ctx.block_reason}")

    # 2. Make the LLM call directly (your code)
    llm_response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": str(ctx.approved_data)}],
    )

    # 3. Audit the call
    await client.audit_llm_call(
        context_id=ctx.context_id,
        response_summary=llm_response.choices[0].message.content[:100],
        provider="openai",
        model="gpt-4",
        token_usage=TokenUsage(
            prompt_tokens=llm_response.usage.prompt_tokens,
            completion_tokens=llm_response.usage.completion_tokens,
            total_tokens=llm_response.usage.total_tokens,
        ),
        latency_ms=250,
    )
```
### OpenAI Integration

Transparent governance for existing OpenAI code:

```python
from openai import OpenAI

from axonflow import AxonFlow
from axonflow.interceptors.openai import wrap_openai_client

openai = OpenAI()
axonflow = AxonFlow(...)

# Wrap the client - governance is now automatic
wrapped = wrap_openai_client(openai, axonflow, user_token="user-123")

# Use as normal
response = wrapped.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
### MCP Connectors

Query data through MCP (Model Context Protocol) connectors:

```python
# List available connectors
connectors = await client.list_connectors()

# Query a connector
result = await client.query_connector(
    user_token="user-jwt",
    connector_name="postgres",
    operation="query",
    params={"sql": "SELECT * FROM users LIMIT 10"},
)
```
### Multi-Agent Planning

Generate and execute multi-agent plans:

```python
# Generate a plan
plan = await client.generate_plan(
    query="Book a flight and hotel for my trip to Paris",
    domain="travel",
)
print(f"Plan has {len(plan.steps)} steps")

# Execute the plan
result = await client.execute_plan(plan.plan_id)
print(f"Result: {result.result}")
```
## Configuration

```python
from axonflow import AxonFlow, Mode, RetryConfig

client = AxonFlow(
    endpoint="https://your-agent.axonflow.com",
    client_id="your-client-id",          # Required for enterprise features
    client_secret="your-client-secret",  # Required for enterprise features
    mode=Mode.PRODUCTION,                # or Mode.SANDBOX
    debug=True,                          # Enable debug logging
    timeout=60.0,                        # Request timeout in seconds
    retry_config=RetryConfig(            # Retry configuration
        enabled=True,
        max_attempts=3,
        initial_delay=1.0,
        max_delay=30.0,
    ),
    cache_enabled=True,                  # Enable response caching
    cache_ttl=60.0,                      # Cache TTL in seconds
)
```
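As a rough illustration of how these retry settings typically combine — exponential backoff doubling from `initial_delay` and capped at `max_delay`. This is an assumption about the retry strategy, not taken from AxonFlow's documentation:

```python
def backoff_delays(max_attempts: int, initial_delay: float, max_delay: float) -> list[float]:
    """Delay (in seconds) before each retry, doubling each time, capped at max_delay."""
    return [min(initial_delay * 2**i, max_delay) for i in range(max_attempts - 1)]


# With the configuration above: two retries after the first attempt.
print(backoff_delays(3, 1.0, 30.0))  # [1.0, 2.0]
```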
## Error Handling

```python
from axonflow.exceptions import (
    AxonFlowError,
    AuthenticationError,
    PolicyViolationError,
    RateLimitError,
    TimeoutError,
)

try:
    response = await client.execute_query(...)
except PolicyViolationError as e:
    print(f"Blocked by policy: {e.block_reason}")
except RateLimitError as e:
    print(f"Rate limited: {e.remaining}/{e.limit} remaining, resets at {e.reset_at}")
except AuthenticationError:
    print("Invalid credentials")
except TimeoutError:
    print("Request timed out")
except AxonFlowError as e:
    print(f"AxonFlow error: {e.message}")
```
## Response Types

All responses are Pydantic models with full type hints:

```python
from axonflow import (
    ClientResponse,
    ConnectorResponse,
    PlanResponse,
    PolicyApprovalResult,
)

# Full autocomplete and type-checking support
response: ClientResponse = await client.execute_query(...)
print(response.success)
print(response.data)
print(response.policy_info.policies_evaluated)
```
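For illustration, the shape of `ClientResponse` implied by the example above can be sketched with stdlib dataclasses. The field names come from this page; the real SDK defines these as Pydantic models and may include additional fields:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class PolicyInfo:
    policies_evaluated: int = 0


@dataclass
class ClientResponse:
    success: bool
    data: Any = None
    policy_info: PolicyInfo = field(default_factory=PolicyInfo)


resp = ClientResponse(
    success=True,
    data={"answer": "..."},
    policy_info=PolicyInfo(policies_evaluated=3),
)
print(resp.success, resp.policy_info.policies_evaluated)  # True 3
```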
## Development

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run linting
ruff check .
ruff format .

# Run type checking
mypy axonflow
```
## Documentation

## License

MIT - See LICENSE for details.