bighub-openai

OpenAI adapter for decision learning on tool calls with BIGHUB.
bighub-openai connects the OpenAI Responses API to BIGHUB so tool calls are evaluated before execution, receive structured recommendations, and learn from real outcomes automatically.
```
OpenAI Responses API → bighub-openai → BIGHUB

tool call     → evaluate            → recommendation + confidence + rationale
agent/runtime → acts                → execution or escalation
real outcome  → report (automatic)  → future recommendations improve
```
Install
```
pip install bighub-openai
```
Requires Python 3.9+.
Dependencies:

- `bighub>=3.1.0,<4.0.0`
- `openai>=2.0.0,<3.0.0`
Quickstart
```python
import os

from bighub_openai import BighubOpenAI

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

runtime = BighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
)

runtime.tool(
    "refund_payment",
    refund_payment,
    value_from_args=lambda a: a["amount"],
)

response = runtime.run(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
)

last = response["execution"]["last"]
print(last["decision"]["recommendation"])            # proceed, proceed_with_caution, review_recommended, do_not_proceed
print(last["decision"]["recommendation_confidence"]) # high, medium, low
print(last["decision"]["risk_score"])                # 0.0 – 1.0
print(last["status"])                                # executed, blocked, approval_required
```
`runtime.tool(...)` auto-generates a strict JSON schema from the Python function signature. Use `parameters_schema=...` only when you need custom constraints.
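To picture what that auto-generation does, here is a simplified, hypothetical sketch (not the adapter's actual implementation) that derives a strict schema from a function's type hints:

```python
import inspect

# Hypothetical sketch of signature-to-schema generation; the adapter's real
# logic may handle defaults, Optionals, and docstrings differently.
_JSON_TYPES = {str: "string", float: "number", int: "integer", bool: "boolean"}

def schema_from_signature(fn) -> dict:
    """Build a strict JSON schema from a function's annotated parameters."""
    props = {
        name: {"type": _JSON_TYPES.get(p.annotation, "string")}
        for name, p in inspect.signature(fn).parameters.items()
    }
    return {
        "type": "object",
        "properties": props,
        "required": list(props),
        "additionalProperties": False,
    }

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

print(schema_from_signature(refund_payment)["required"])  # ['order_id', 'amount']
```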
How It Works
For every tool call, the adapter follows the same loop:
- The model proposes a tool call.
- The adapter captures the action, arguments, actor, and domain.
- BIGHUB evaluates the action in context.
- A structured recommendation is returned.
- The adapter decides how to handle execution based on mode:
  - `advisory` — surfaces the recommendation; the agent executes by default
  - `review` — requires approval or escalation before execution
  - `enforced` — applies runtime constraints when configured
- If the tool executes, the outcome is automatically reported back to BIGHUB.
- Outcome feedback means future similar tool calls receive better recommendations.
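The mode-dependent handling in the loop above can be sketched as a small decision function (an illustrative simplification, not the adapter's actual code; the `enforced` mapping is an assumption):

```python
def resolve_status(recommendation: str, mode: str) -> str:
    """Map a BIGHUB recommendation plus enforcement mode to an execution status."""
    if mode == "advisory":
        return "executed"           # recommendation is surfaced; agent executes by default
    if mode == "review":
        return "approval_required"  # wait for approval or escalation
    if mode == "enforced":
        # Assumed constraint: hard-block only explicit do_not_proceed recommendations
        return "blocked" if recommendation == "do_not_proceed" else "executed"
    raise ValueError(f"unknown enforcement mode: {mode!r}")

print(resolve_status("do_not_proceed", "enforced"))  # blocked
```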
Response Shape
```json
{
  "llm_response": {...},
  "execution": {
    "events": [...],
    "last": {
      "tool": "refund_payment",
      "status": "executed",
      "decision": {
        "recommendation": "proceed_with_caution",
        "recommendation_confidence": "medium",
        "risk_score": 0.21,
        "enforcement_mode": "advisory",
        "decision_intelligence": {
          "rationale": "Matched positive outcomes from similar refund decisions",
          "evidence_status": "sufficient",
          "trajectory_health": "healthy"
        },
        "request_id": "act_abc123"
      }
    }
  }
}
```
Primary decision signals
| Field | Description |
|---|---|
| `recommendation` | `proceed`, `proceed_with_caution`, `review_recommended`, `do_not_proceed` |
| `recommendation_confidence` | `high`, `medium`, `low` |
| `risk_score` | Aggregated risk (0–1) |
| `enforcement_mode` | `advisory`, `review`, `enforced` |
| `decision_intelligence` | Rationale, evidence status, trajectory health, alternatives |
Execution statuses
| Status | Description |
|---|---|
| `executed` | Tool ran successfully |
| `blocked` | BIGHUB or runtime prevented execution |
| `approval_required` | Waiting for human approval |
| `tool_error` | Tool raised an exception during execution |
Legacy fields such as `allowed`, `result`, and `reason` may still be present for backward compatibility, but they are not the primary product surface.
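When consuming events from mixed adapter versions, a small accessor with a legacy fallback keeps call sites uniform (the `allowed`-to-recommendation mapping below is an assumption for illustration):

```python
def read_recommendation(event: dict) -> str:
    """Prefer the structured decision; fall back to the legacy boolean field."""
    decision = event.get("decision") or {}
    if "recommendation" in decision:
        return decision["recommendation"]
    # Assumed legacy mapping: allowed=True behaves like "proceed"
    return "proceed" if event.get("allowed") else "do_not_proceed"

print(read_recommendation({"decision": {"recommendation": "review_recommended"}}))  # review_recommended
print(read_recommendation({"allowed": True}))                                       # proceed
```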
Configuration
Constructor parameters
```python
runtime = BighubOpenAI(
    # Required
    bighub_api_key="bh_live_xxx",
    actor="AI_AGENT_001",
    domain="customer_transactions",

    # OpenAI (one of these is required)
    openai_api_key="sk-xxx",              # or pass your own client:
    openai_client=my_openai_client,       # pre-configured OpenAI() instance

    # Decision behavior
    decision_mode="submit",               # "submit" (default) or "submit_payload"
    fail_mode="closed",                   # "closed" = block on BIGHUB errors, "open" = allow on errors
    max_tool_rounds=8,                    # max consecutive tool call rounds

    # Outcome & memory (automatic)
    outcome_reporting=True,               # auto-report tool execution results
    memory_enabled=True,                  # ingest decision memory events
    on_decision=my_callback,              # called after each BIGHUB decision

    # Provider resilience
    provider_timeout_seconds=30.0,
    provider_max_retries=2,
    provider_retry_backoff_seconds=0.25,
    provider_circuit_breaker_failures=0,  # 0 = disabled
    evaluate_retries=2,
)
```
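As an example of an `on_decision` hook, the callback below flags risky decisions for auditing. The signature (a single decision dict) and the 0.5 threshold are assumptions for illustration:

```python
def my_callback(decision: dict) -> bool:
    """Return True when a decision is worth surfacing to an audit log."""
    risky = decision.get("risk_score", 0.0) >= 0.5  # threshold is arbitrary
    cautious = decision.get("recommendation") in ("review_recommended", "do_not_proceed")
    if risky or cautious:
        print(f"[audit] {decision.get('request_id')}: {decision.get('recommendation')}")
        return True
    return False

print(my_callback({"request_id": "act_1", "recommendation": "proceed", "risk_score": 0.1}))  # False
```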
fail_mode
| Mode | Behavior when BIGHUB is unreachable |
|---|---|
| `closed` (default) | Block execution — fail safe |
| `open` | Allow execution — fail open |
Registering tools
Basic
```python
runtime.tool("send_email", send_email)
```
With value and target extraction
```python
runtime.tool(
    "transfer_funds",
    transfer_funds,
    value_from_args=lambda a: a["amount"],
    target_from_args=lambda a: a["recipient_id"],
)
```
Per-tool overrides
```python
runtime.tool(
    "delete_account",
    delete_account,
    domain="account_management",          # override adapter-level domain
    actor="admin_agent",                  # override adapter-level actor
    action_name="account_deletion",       # custom action name for BIGHUB
    decision_mode="submit_payload",       # per-tool decision mode
    metadata_from_args=lambda a: {"priority": "high"},
)
```
Custom JSON schema
```python
runtime.tool(
    "approve_loan",
    approve_loan,
    parameters_schema={
        "type": "object",
        "properties": {
            "loan_id": {"type": "string"},
            "amount": {"type": "number", "minimum": 0},
        },
        "required": ["loan_id", "amount"],
        "additionalProperties": False,
    },
    strict=True,
)
```
Full API: `register_tool()`

```python
runtime.register_tool(
    name="refund_payment",
    fn=refund_payment,
    description="Process a customer refund",
    parameters_schema={...},
    value_from_args=lambda a: a["amount"],
    target_from_args=lambda a: a["order_id"],
    action_name="refund",
    domain="payments",
    actor="refund_bot",
    metadata_from_args=lambda a: {"source": "support_ticket"},
    decision_mode="submit",
    strict=True,
)
```
Streaming
```python
for event in runtime.run_stream(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
):
    if event["type"] == "llm_delta":
        print(event["delta"], end="")
    elif event["type"] == "execution_event":
        print("\n[decision]", event["event"]["tool"], event["event"]["status"])
    elif event["type"] == "final_response":
        print("\nDone:", event["response"]["output_text"])
```
| Event type | Description |
|---|---|
| `llm_delta` | Incremental text token |
| `llm_text_done` | Complete text segment |
| `execution_event` | Tool recommendation and execution result |
| `final_response` | Final payload, same shape as `run()` |
| `response_done` | Response finished |
| `response_failed` | Response error |
Async
```python
from bighub_openai import AsyncBighubOpenAI

async with AsyncBighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
) as runtime:
    runtime.tool("refund_payment", refund_payment, value_from_args=lambda a: a["amount"])
    response = await runtime.run(
        messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
        model="gpt-4.1",
    )

    # Async streaming
    async for event in runtime.run_stream(
        messages=[{"role": "user", "content": "Refund order ord_456"}],
        model="gpt-4.1",
    ):
        if event["type"] == "llm_delta":
            print(event["delta"], end="")
```
Human-in-the-loop approvals
```python
result = runtime.run_with_approval(
    messages=[{"role": "user", "content": "Refund order ord_123 for 5000"}],
    model="gpt-4.1",
    on_approval_required=lambda ctx: {
        "resolution": "approved",
        "comment": "approved by on-call",
    },
)
```
When BIGHUB returns `requires_approval`, the adapter pauses execution and calls `on_approval_required` with the decision context. Return `{"resolution": "approved"}` to resume execution, or `{"resolution": "denied"}` to block it.
Run approval callbacks server-side, not in clients, to avoid exposing approval credentials.
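A callback can also route on the decision itself, auto-approving only low-risk actions. The `ctx` shape below (a dict carrying the decision) and the 0.3 threshold are assumptions for illustration:

```python
def on_approval_required(ctx: dict) -> dict:
    """Auto-approve low-risk actions; deny everything else pending manual review."""
    decision = ctx.get("decision") or {}
    low_risk = decision.get("risk_score", 1.0) < 0.3
    if low_risk and decision.get("recommendation") != "do_not_proceed":
        return {"resolution": "approved", "comment": "auto-approved: low risk"}
    return {"resolution": "denied", "comment": "escalated for manual review"}

print(on_approval_required({"decision": {"risk_score": 0.1, "recommendation": "proceed"}})["resolution"])  # approved
```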
Automatic outcome reporting
When `outcome_reporting=True` (the default), the adapter automatically reports:

- Successful execution → `SUCCESS` outcome with the tool output
- Blocked execution → `BLOCKED` outcome
- Tool errors → `FAILURE` outcome with error details

This closes the learning loop without manual instrumentation. Disable with `outcome_reporting=False` if you report outcomes manually via the SDK.
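The status-to-outcome mapping described above can be sketched as (an illustrative simplification; the fallback label is an assumption):

```python
def outcome_for(status: str) -> str:
    """Map an execution status to the outcome label reported to BIGHUB."""
    return {
        "executed": "SUCCESS",
        "blocked": "BLOCKED",
        "tool_error": "FAILURE",
    }.get(status, "UNKNOWN")  # fallback label is an assumption

print(outcome_for("tool_error"))  # FAILURE
```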
Decision memory
When `memory_enabled=True` (the default), the adapter ingests structured events (tool calls, decisions, outcomes) into BIGHUB's decision memory. This enables pattern detection and context-aware recommendations across sessions.
Context manager
```python
with BighubOpenAI(...) as runtime:
    runtime.tool("refund_payment", refund_payment)
    response = runtime.run(...)
# BIGHUB client is automatically closed
```
API Reference
BighubOpenAI / AsyncBighubOpenAI
| Method | Description |
|---|---|
| `tool(name, fn, **kwargs)` | Register a tool (shorthand for `register_tool`) |
| `register_tool(name, fn, description, parameters_schema, ...)` | Register a tool with full options |
| `list_tools()` | List registered tools with OpenAI-compatible schemas |
| `run(messages, model, instructions, temperature, extra_create_args)` | Run a complete evaluated interaction |
| `run_stream(messages, model, instructions, temperature, extra_create_args)` | Run with streaming events |
| `run_with_approval(messages, model, ..., on_approval_required)` | Run with human-in-the-loop approval |
| `close()` | Close the underlying BIGHUB client |
License
MIT