bighub-openai - Production-Safe OpenAI Agents

An OpenAI adapter for governing tool execution with BIGHUB. bighub-openai makes OpenAI tool-calling agents production-safe.
Where it fits:
OpenAI tool call -> bighub-openai -> BIGHUB policies -> execute/block/approve
bighub-openai depends on both bighub and the official openai Python SDK.
It is built for OpenAI Python SDK v1+ (openai>=1.0.0,<2.0.0).
Before any registered tool executes, the adapter:
- Validates the action via BIGHUB
- Enforces policy boundaries
- Blocks or escalates risky decisions
- Ingests governed execution into Future Memory
Decision enforcement outcomes:
- allowed -> execute the tool
- blocked -> do not execute the tool
- requires_approval -> do not execute the tool; the adapter output status is approval_required
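The enforcement step reduces to a small mapping from decision status to outcome. A minimal sketch of that mapping (the function name and returned shape are illustrative, not the adapter's internals):

```python
def enforce_decision(status: str) -> dict:
    """Map a BIGHUB decision status to an execution outcome.

    Status names follow the list above; the returned dict shape is illustrative.
    """
    if status == "allowed":
        return {"execute": True, "output_status": "executed"}
    if status == "blocked":
        return {"execute": False, "output_status": "blocked"}
    if status == "requires_approval":
        return {"execute": False, "output_status": "approval_required"}
    raise ValueError(f"unknown decision status: {status}")
```

Note that requires_approval is not a soft "allowed": the tool does not run until the approval is resolved.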
Install
pip install bighub-openai
Requires Python 3.9+.
Quickstart
from bighub_openai import GuardedOpenAI

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

guard = GuardedOpenAI(
    openai_api_key="sk-...",
    bighub_api_key="bhk_...",
    actor="AI_AGENT_001",
    domain="payments",
)

guard.tool("refund_payment", refund_payment, value_from_args=lambda a: a["amount"])

response = guard.run(
    messages=[
        {"role": "user", "content": "Refund order ord_123 for 199.99"},
    ],
    model="gpt-4.1",
)
print(response)
Async quickstart
from bighub_openai import AsyncGuardedOpenAI

guard = AsyncGuardedOpenAI(
    openai_api_key="sk-...",
    bighub_api_key="bhk_...",
    actor="AI_AGENT_001",
    domain="payments",
)
guard.tool(...) auto-generates a strict JSON schema from your Python function signature.
Provide parameters_schema=... only when you need custom schema constraints.
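For example, a stricter schema for the refund tool might look like the following. The specific constraints (order-id pattern, amount ceiling) are hypothetical, chosen only to show the kind of restrictions a custom schema can add:

```python
# Hypothetical custom JSON schema for refund_payment; tightens the
# auto-generated one with an order-id pattern and an amount ceiling.
refund_schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string", "pattern": "^ord_"},
        "amount": {"type": "number", "exclusiveMinimum": 0, "maximum": 500},
    },
    "required": ["order_id", "amount"],
    "additionalProperties": False,
}

# Registered the same way as before, overriding the inferred schema:
# guard.tool("refund_payment", refund_payment, parameters_schema=refund_schema)
```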
run(...) returns a structured payload with both model output and governance execution events:
{
  "llm_response": {...},
  "execution": {
    "events": [...],
    "last": {
      "tool": "refund_payment",
      "status": "executed" | "blocked" | "approval_required",
      "decision": {...}
    }
  }
}
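Callers typically branch on the last governed tool call. A small helper, assuming only the fields documented in the payload shape above:

```python
def last_tool_status(payload: dict) -> str:
    """Summarize the final governed tool call from a run(...) payload."""
    last = payload.get("execution", {}).get("last")
    if not last:
        return "no governed tool calls"
    return f'{last["tool"]} -> {last["status"]}'
```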
run_stream(...) yields structured stream events:
- llm_delta (provider text delta)
- execution_event (governed tool decision/execution result)
- final_response (same payload shape as run(...))
Example:
for event in guard.run_stream(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
):
    if event["type"] == "llm_delta":
        print(event["delta"], end="")
    elif event["type"] == "execution_event":
        print("\n[tool]", event["event"]["tool"], event["event"]["status"])
    elif event["type"] == "final_response":
        print("\nDone:", event["response"]["output_text"])
Async example:
async for event in guard.run_stream(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
):
    if event["type"] == "llm_delta":
        print(event["delta"], end="")
    elif event["type"] == "execution_event":
        print("\n[tool]", event["event"]["tool"], event["event"]["status"])
    elif event["type"] == "final_response":
        print("\nDone:", event["response"]["output_text"])
Decision modes
- decision_mode="submit" (default) -> calls client.actions.submit(...)
- decision_mode="submit_v2" -> calls client.actions.submit_v2(...)
You can set it globally on GuardedOpenAI(...) or per tool in register_tool(..., decision_mode="submit_v2").
Audit hook
Use on_decision to forward structured events to your observability stack:
guard = GuardedOpenAI(..., on_decision=lambda event: print(event))
Event payload contract includes stable identifiers:
- trace_id (run correlation id)
- request_id (BIGHUB validation id, when available)
- event_id (adapter event id)
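A sketch of an on_decision callback that ships those identifiers as structured JSON log lines. Only the three documented keys are assumed; the logger name and helper are illustrative:

```python
import json
import logging

logger = logging.getLogger("bighub.audit")

def audit_fields(event: dict) -> dict:
    """Extract the stable identifiers from a decision event."""
    return {key: event.get(key) for key in ("trace_id", "request_id", "event_id")}

def on_decision(event: dict) -> None:
    # One JSON object per line is easy to ship to most log pipelines.
    logger.info(json.dumps(audit_fields(event)))

# guard = GuardedOpenAI(..., on_decision=on_decision)
```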
Silent mode
When you want to evaluate governance without executing tools:
decision = guard.check_tool("refund_payment", {"order_id": "ord_123", "amount": 199.0})
Approval loop helper
Use HITL helpers when a tool decision returns requires_approval:
- run_with_approval(...) runs, captures a pending approval, and can resume with a callback
- resume_after_approval(...) resolves one approval request, then resumes tool execution
Approval callbacks should run server-side (not in the client) to avoid exposing approval credentials.
Example:
result = guard.run_with_approval(
    messages=[{"role": "user", "content": "Refund order ord_123 for 5000"}],
    model="gpt-4.1",
    on_approval_required=lambda ctx: {
        "resolution": "approved",
        "comment": "approved by on-call",
    },
)
print(result["approval_loop"])
Async example:
result = await guard.run_with_approval(
    messages=[{"role": "user", "content": "Refund order ord_123 for 5000"}],
    model="gpt-4.1",
    on_approval_required=lambda ctx: {
        "resolution": "approved",
        "comment": "approved by on-call",
    },
)
print(result["approval_loop"])
Future memory ingest
By default, GuardedOpenAI ingests governed execution events to BIGHUB future memory:
guard = GuardedOpenAI(..., memory_enabled=True, memory_source="openai_adapter")
The adapter ingests memory in best-effort mode:
- short timeout (memory_ingest_timeout_ms, default 300)
- exceptions are swallowed
- the governance execution path is never blocked by telemetry
Each event includes idempotency/versioning metadata for stable analytics:
- event_id (dedupe key)
- seq (position within run)
- schema_version (current: 1)
- source_version (for example bighub-openai@0.1.x)
This powers pattern learning and context endpoints such as client.actions.memory_context(...).
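Because event_id is a dedupe key and seq orders events within a run, a downstream consumer can make ingestion idempotent. A minimal sketch, assuming only those two documented fields:

```python
def dedupe_and_order(events: list) -> list:
    """Drop duplicate events by event_id, then order them by position in the run."""
    unique = {}
    for event in events:
        # First occurrence wins; retried ingests of the same event are no-ops.
        unique.setdefault(event["event_id"], event)
    return sorted(unique.values(), key=lambda e: e["seq"])
```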
Fail modes

- fail_mode="closed" (default): if the policy check fails, tool execution is blocked.
- fail_mode="open": if the policy check fails unexpectedly, tool execution proceeds.
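The two modes differ only when the policy check itself errors out (for example, a network failure before BIGHUB returns a decision). A sketch of that branch, with the function name and arguments assumed for illustration:

```python
from typing import Optional

def may_execute(
    decision_status: Optional[str],
    check_error: bool,
    fail_mode: str = "closed",
) -> bool:
    """Decide whether a tool runs, mirroring the fail-mode semantics above.

    decision_status is None when the policy check failed before a decision arrived.
    """
    if check_error:
        # Fail-closed blocks on infrastructure errors; fail-open lets the tool run.
        return fail_mode == "open"
    return decision_status == "allowed"
```

Fail-closed is the safe default for high-risk domains like payments; fail-open trades safety for availability.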
Provider resilience knobs
GuardedOpenAI and AsyncGuardedOpenAI expose provider resilience settings:
- provider_timeout_seconds
- provider_max_retries
- provider_retry_backoff_seconds
- provider_retry_max_backoff_seconds
- provider_retry_jitter_seconds
- provider_circuit_breaker_failures (set > 0 to enable)
- provider_circuit_breaker_reset_seconds
Example:
guard = GuardedOpenAI(
    openai_api_key="sk-...",
    bighub_api_key="bhk_...",
    actor="AI_AGENT_001",
    domain="payments",
    provider_timeout_seconds=20,
    provider_max_retries=3,
    provider_retry_backoff_seconds=0.2,
    provider_retry_max_backoff_seconds=2.0,
    provider_retry_jitter_seconds=0.15,
    provider_circuit_breaker_failures=5,
    provider_circuit_breaker_reset_seconds=30,
)
Notes
- This adapter is intentionally provider-specific.
- Core policy and transport behavior remain in the bighub SDK.