
bighub-openai

OpenAI adapter for decision learning on tool calls.

bighub-openai connects the OpenAI Responses API to BIGHUB so tool calls can be evaluated before execution and learned from after execution.

OpenAI Responses API  ->  bighub-openai  ->  BIGHUB
tool call             ->  evaluate       ->  execute / block / approval
real outcome          ->  report         ->  learn from similar cases

Install

pip install bighub-openai

Requires Python 3.9+.

Dependencies:

  • bighub>=3.0.0,<4.0.0
  • openai>=2.0.0,<3.0.0

Quickstart

import os
from bighub_openai import BighubOpenAI

def refund_payment(order_id: str, amount: float) -> dict:
    return {"ok": True, "order_id": order_id, "amount": amount}

guard = BighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
)

guard.tool(
    "refund_payment",
    refund_payment,
    value_from_args=lambda a: a["amount"],
)

response = guard.run(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
)

print(response["execution"]["last"]["status"])
# executed | blocked | approval_required

guard.tool(...) auto-generates a strict JSON schema from the Python function signature. Use parameters_schema=... only when you need custom constraints.
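
Where custom constraints are needed, a hand-written JSON Schema can stand in for the auto-generated one. The schema below and the local check mirroring it are illustrative sketches (the constraint values are assumptions, not library defaults); only the `parameters_schema` keyword itself comes from the text above.

```python
# Illustrative custom JSON Schema for the refund_payment tool.
refund_schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string", "pattern": "^ord_"},
        "amount": {"type": "number", "exclusiveMinimum": 0, "maximum": 10000},
    },
    "required": ["order_id", "amount"],
    "additionalProperties": False,
}

def check_refund_args(args: dict) -> bool:
    """Naive local check mirroring the schema (illustration only)."""
    return (
        isinstance(args.get("order_id"), str)
        and args["order_id"].startswith("ord_")
        and isinstance(args.get("amount"), (int, float))
        and 0 < args["amount"] <= 10000
    )
```

The schema would then be passed as guard.tool("refund_payment", refund_payment, parameters_schema=refund_schema, ...) in place of auto-generation.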


How It Works

For every tool call, the adapter follows the same loop:

  1. The model proposes a tool call
  2. The adapter captures action + arguments + actor + domain
  3. BIGHUB evaluates the action in context
  4. A decision is returned (allowed, blocked, requires_approval)
  5. If allowed, the tool executes
  6. The action can later be linked to its real outcome
  7. Outcome feedback gives BIGHUB more experience to judge similar tool calls in the future
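
The loop above can be sketched as a standalone toy (not the adapter's internals): `evaluate` here is a stand-in for BIGHUB's decision call, with a made-up amount threshold.

```python
def evaluate(action: str, args: dict, actor: str, domain: str) -> dict:
    """Toy stand-in for BIGHUB's evaluation call (illustration only)."""
    if args.get("amount", 0) > 1000:  # hypothetical threshold
        return {"result": "requires_approval", "allowed": False}
    return {"result": "allowed", "allowed": True}

def guarded_call(tool_fn, action: str, args: dict, actor: str, domain: str) -> dict:
    decision = evaluate(action, args, actor, domain)   # steps 2-4: capture + evaluate
    if decision["result"] == "allowed":
        outcome = tool_fn(**args)                      # step 5: execute
        status = "executed"
    elif decision["result"] == "requires_approval":
        outcome, status = None, "approval_required"
    else:
        outcome, status = None, "blocked"
    # steps 6-7: the real adapter would report `outcome` back for learning
    return {"status": status, "decision": decision, "outcome": outcome}
```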

Response Shape

{
  "llm_response": {...},
  "execution": {
    "events": [...],
    "last": {
      "tool": "refund_payment",
      "status": "executed",   # executed | blocked | approval_required
      "decision": {
        "allowed": True,
        "result": "allowed",
        "reason": "Matched positive outcomes from similar refund decisions",
        "request_id": "act_abc123",
        "requires_approval": False,
        "risk_score": 0.21
      }
    }
  }
}
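
A caller typically branches on the last execution status. A minimal sketch, assuming exactly the shape documented above:

```python
def handle_response(response: dict) -> str:
    """Branch on the last tool call's status (shape as shown above)."""
    last = response["execution"]["last"]
    if last["status"] == "executed":
        return f"{last['tool']} ran (risk {last['decision']['risk_score']})"
    if last["status"] == "approval_required":
        return f"{last['tool']} is waiting for a human: {last['decision']['reason']}"
    return f"{last['tool']} was blocked: {last['decision']['reason']}"
```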

The exact decision payload can include additional backend fields. In most integrations, the key signals are:

  • whether execution is allowed now
  • whether human approval is required
  • how the action was judged from past experience
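
Since the payload may carry extra backend fields, it is safest to read only those signals defensively. A small helper along these lines (my naming, not the library's):

```python
def key_signals(decision: dict) -> tuple:
    """Extract the core signals, tolerating extra backend fields."""
    return (
        decision.get("allowed", False),
        decision.get("requires_approval", False),
        decision.get("reason", ""),
    )
```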

Decision vs execution naming:

  • Decision result: allowed / blocked / requires_approval
  • Execution status: executed / blocked / approval_required
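
The two vocabularies map one-to-one; a small translation table makes the correspondence explicit (note that "allowed" maps to "executed" only once the tool has actually run):

```python
# Decision result -> execution status, per the naming above.
DECISION_TO_STATUS = {
    "allowed": "executed",
    "blocked": "blocked",
    "requires_approval": "approval_required",
}
```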

Streaming

for event in guard.run_stream(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
):
    if event["type"] == "llm_delta":
        print(event["delta"], end="")
    elif event["type"] == "execution_event":
        print("\n[decision]", event["event"]["tool"], event["event"]["status"])
    elif event["type"] == "final_response":
        print("\nDone:", event["response"]["output_text"])

Event type         Description
llm_delta          Incremental text token
llm_text_done      Complete text segment
execution_event    Tool decision or execution result
final_response     Final payload, same shape as run()
response_done      Response finished
response_failed    Response error
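
Instead of an if/elif chain, the stream can also be consumed through a handler table keyed on event type. A sketch (the handler names and collected state are mine):

```python
def route_event(event: dict, handlers: dict) -> None:
    """Dispatch a stream event to its handler; unknown types are ignored."""
    handler = handlers.get(event["type"])
    if handler:
        handler(event)

# Example handlers collecting text deltas and execution statuses.
text, statuses = [], []
handlers = {
    "llm_delta": lambda e: text.append(e["delta"]),
    "execution_event": lambda e: statuses.append(e["event"]["status"]),
}
```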

Async

from bighub_openai import AsyncBighubOpenAI

guard = AsyncBighubOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    bighub_api_key=os.getenv("BIGHUB_API_KEY"),
    actor="AI_AGENT_001",
    domain="customer_transactions",
)

response = await guard.run(
    messages=[{"role": "user", "content": "Refund order ord_123 for 199.99"}],
    model="gpt-4.1",
)
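
The await above assumes an already-running event loop; at the top level of a script it would be driven with asyncio.run. A sketch with the network call stubbed out (fake_run stands in for await guard.run(...), which needs real API keys):

```python
import asyncio

async def main() -> str:
    async def fake_run():
        # Stand-in for `await guard.run(...)`.
        return {"execution": {"last": {"status": "executed"}}}

    response = await fake_run()
    return response["execution"]["last"]["status"]

status = asyncio.run(main())
```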

Human-in-the-Loop Approvals

result = guard.run_with_approval(
    messages=[{"role": "user", "content": "Refund order ord_123 for 5000"}],
    model="gpt-4.1",
    on_approval_required=lambda ctx: {
        "resolution": "approved",
        "comment": "approved by on-call",
    },
)

Run approval callbacks server-side, not in clients, to avoid exposing approval credentials.
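
A server-side callback will usually apply a policy to the pending action rather than approve unconditionally. The ctx fields used below ("tool", "arguments") are assumptions about the callback payload, not documented fields:

```python
def approval_policy(ctx: dict) -> dict:
    """Auto-approve small refunds, escalate the rest (sketch; threshold is made up)."""
    amount = ctx.get("arguments", {}).get("amount", 0)
    if ctx.get("tool") == "refund_payment" and amount <= 500:
        return {"resolution": "approved", "comment": "auto-approved under limit"}
    return {"resolution": "rejected", "comment": "needs manual review"}
```

Such a function could be passed as on_approval_required=approval_policy in place of the inline lambda above.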


License

MIT
