bighub-openai
OpenAI adapter for decision learning on AI agent tool calls with BIGHUB.
bighub-openai connects the OpenAI Responses API to BIGHUB so tool calls can be evaluated before execution and learned from after execution.
OpenAI Responses API -> bighub-openai -> BIGHUB
tool call -> evaluate -> execute / block / approval
real outcome -> report -> learn from similar cases
Install
pip install bighub-openai
Requires Python 3.9+.
Dependencies:
bighub>=3.0.0,<4.0.0
openai>=2.0.0,<3.0.0
Quickstart
import os

from bighub_openai import BighubOpenAI

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

guard = BighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
)

guard.tool(
    "refund_payment",
    refund_payment,
    value_from_args=lambda a: a["amount"],
)

response = guard.run(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
)

print(response["execution"]["last"]["status"])
# executed | blocked | approval_required
guard.tool(...) auto-generates a strict JSON schema from the Python function signature. Use parameters_schema=... only when you need custom constraints.
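As an illustration of what signature-based schema generation can look like, the mapping from type annotations to a strict JSON schema can be sketched with the standard library alone. This is an assumption about the approach, not the adapter's actual generator:

```python
import inspect

# Illustrative mapping from Python annotations to JSON-schema types.
_TYPE_MAP = {str: "string", float: "number", int: "integer", bool: "boolean", dict: "object"}

def schema_from_signature(fn):
    """Build a strict JSON schema from a function's type annotations."""
    props = {
        name: {"type": _TYPE_MAP.get(param.annotation, "string")}
        for name, param in inspect.signature(fn).parameters.items()
    }
    return {
        "type": "object",
        "properties": props,
        "required": list(props),
        "additionalProperties": False,
    }

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

print(schema_from_signature(refund_payment))
```

Marking every parameter required and setting additionalProperties to False is what "strict" tool schemas typically mean in the Responses API.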
How It Works
For every tool call, the adapter follows the same loop:
- The model proposes a tool call
- The adapter captures action + arguments + actor + domain
- BIGHUB evaluates the action in context
- A decision is returned (allowed, blocked, or requires_approval)
- If allowed, the tool executes
- The action can later be linked to its real outcome
- Outcome feedback means future similar tool calls are judged with more experience
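The loop above can be sketched in plain Python. Here `evaluate` is a stand-in for the BIGHUB call, with made-up fixed thresholds purely for illustration; the real decision comes from learned experience, not rules:

```python
def evaluate(action: str, args: dict) -> dict:
    # Stand-in for the BIGHUB evaluation call: fixed thresholds
    # instead of learned outcomes, purely for illustration.
    amount = args.get("amount", 0)
    if amount > 10_000:
        return {"result": "blocked"}
    if amount > 1_000:
        return {"result": "requires_approval"}
    return {"result": "allowed"}

def guarded_call(action: str, fn, args: dict) -> dict:
    # Evaluate first; execute only when the decision allows it.
    decision = evaluate(action, args)
    if decision["result"] == "allowed":
        return {"status": "executed", "output": fn(**args)}
    if decision["result"] == "requires_approval":
        return {"status": "approval_required"}
    return {"status": "blocked"}

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

print(guarded_call("refund_payment", refund_payment,
                   {"order_id": "ord_123", "amount": 199.99}))
```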
Response Shape
{
  "llm_response": {...},
  "execution": {
    "events": [...],
    "last": {
      "tool": "refund_payment",
      "status": "executed",  # executed | blocked | approval_required
      "decision": {
        "allowed": True,
        "result": "allowed",
        "reason": "Matched positive outcomes from similar refund decisions",
        "request_id": "act_abc123",
        "requires_approval": False,
        "risk_score": 0.21
      }
    }
  }
}
The exact decision payload can include additional backend fields. In most integrations, the key signals are:
- whether execution is allowed now
- whether human approval is required
- how the action was judged from past experience
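Most callers branch on those signals. This sketch routes a run() result by its last execution event, using a hard-coded payload in the documented response shape:

```python
def handle(result: dict) -> str:
    """Route a run() result by the status of its last execution event."""
    last = result["execution"]["last"]
    if last["status"] == "executed":
        return f"{last['tool']} ran"
    if last["status"] == "approval_required":
        return f"{last['tool']} is waiting on a human"
    return f"{last['tool']} blocked: {last['decision']['reason']}"

# Example payload in the documented shape.
example = {
    "execution": {
        "last": {
            "tool": "refund_payment",
            "status": "executed",
            "decision": {"allowed": True, "result": "allowed",
                         "reason": "Matched positive outcomes", "risk_score": 0.21},
        }
    }
}
print(handle(example))  # refund_payment ran
```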
Decision vs execution naming:
- Decision result: allowed / blocked / requires_approval
- Execution status: executed / blocked / approval_required
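The two vocabularies map one-to-one; a small lookup table (an illustration, not part of the adapter's API) makes the correspondence explicit:

```python
# Decision result (from BIGHUB) -> execution status (reported by the adapter).
DECISION_TO_STATUS = {
    "allowed": "executed",
    "blocked": "blocked",
    "requires_approval": "approval_required",
}
print(DECISION_TO_STATUS["requires_approval"])  # approval_required
```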
Streaming
for event in guard.run_stream(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
):
    if event["type"] == "llm_delta":
        print(event["delta"], end="")
    elif event["type"] == "execution_event":
        print("\n[decision]", event["event"]["tool"], event["event"]["status"])
    elif event["type"] == "final_response":
        print("\nDone:", event["response"]["output_text"])
| Event type | Description |
|---|---|
| llm_delta | Incremental text token |
| llm_text_done | Complete text segment |
| execution_event | Tool decision or execution result |
| final_response | Final payload, same shape as run() |
| response_done | Response finished |
| response_failed | Response error |
Async
import asyncio
import os

from bighub_openai import AsyncBighubOpenAI

guard = AsyncBighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
)

async def main():
    response = await guard.run(
        messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
        model="gpt-4.1",
    )
    print(response["execution"]["last"]["status"])

asyncio.run(main())
Human-in-the-Loop Approvals
result = guard.run_with_approval(
    messages=[{"role": "user", "content": "Refund order ord_123 for 5000"}],
    model="gpt-4.1",
    on_approval_required=lambda ctx: {
        "resolution": "approved",
        "comment": "approved by on-call",
    },
)
Run approval callbacks server-side, not in clients, to avoid exposing approval credentials.
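In practice the callback encodes a policy rather than a blanket approval. This sketch auto-approves small refunds and escalates everything else; the ctx fields ("tool", "arguments") and the threshold are assumptions for illustration, not a documented contract:

```python
APPROVAL_LIMIT = 1_000  # assumed policy threshold, not part of the library

def on_approval_required(ctx: dict) -> dict:
    # ctx fields ("tool", "arguments") are illustrative assumptions
    # about the callback context, not a documented contract.
    args = ctx.get("arguments", {})
    if ctx.get("tool") == "refund_payment" and args.get("amount", 0) <= APPROVAL_LIMIT:
        return {"resolution": "approved", "comment": "auto-approved under limit"}
    return {"resolution": "rejected", "comment": "escalate to manual review"}

print(on_approval_required({"tool": "refund_payment", "arguments": {"amount": 500}}))
```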
License
MIT
File details
Details for the file bighub_openai-3.0.1.tar.gz.
File metadata
- Download URL: bighub_openai-3.0.1.tar.gz
- Upload date:
- Size: 22.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f8186735d82a23a91a57256cae6f46bc9fd35acfb8ef072c5ce5692b5ad89e70 |
| MD5 | 07a452bddd4377e01acc169e14411e62 |
| BLAKE2b-256 | 6d00e04a40cb921f9ff060e6a3998f305be0f84c2da8dd2aae4bcaf291f3f368 |
File details
Details for the file bighub_openai-3.0.1-py3-none-any.whl.
File metadata
- Download URL: bighub_openai-3.0.1-py3-none-any.whl
- Upload date:
- Size: 16.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c3d2a58f964e912808ca0a9e0239bfb73bebb5b32c90380e522f77e47055c23b |
| MD5 | 3b71291fe24e43bce0384b2e15c6a983 |
| BLAKE2b-256 | 41ea646660c3aa8f64950376e0f8d54f6c7ba40e63d7b16afbc48832afbc7b3a |