
Project description

fangcun-hook-sdk

Inline-hook client SDK for FangcunGuard. Lets your agent call a FangcunGuard Hook server for input/output safety scanning without changing the agent's LLM URL or API key.

agent (your code, your LLM key, your LLM URL)
   │
   ├─ pre-LLM   ──► hook.scan_input(messages)   ──► allow / block / replace / anonymize
   ├─ call your LLM as usual
   ├─ post-LLM  ──► hook.scan_output(content)   ──► allow / block / replace / restore
   ▼
agent receives the final message
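The flow above can be sketched as ordinary Python. Everything here is illustrative: the stubbed scanners stand in for the Hook server, and the ScanResult shape only loosely mirrors the SDK's real response objects.

```python
# Illustrative sketch of the inline-hook flow: scan input, call the LLM,
# scan output. The scanners are stubs; in the real SDK these decisions
# come from the FangcunGuard Hook server.
from dataclasses import dataclass

@dataclass
class ScanResult:
    action: str            # "allow" | "block" | "replace" | "anonymize" | "restore"
    content: str = ""
    message: str = ""

def run_turn(user_text, scan_input, call_llm, scan_output):
    pre = scan_input(user_text)
    if pre.action == "block":
        raise RuntimeError(pre.message)
    # On replace/anonymize, the rewritten text is what the LLM sees.
    prompt = pre.content if pre.action in ("replace", "anonymize") else user_text
    reply = call_llm(prompt)
    post = scan_output(reply)
    if post.action == "block":
        raise RuntimeError(post.message)
    return post.content if post.action in ("replace", "restore") else reply

# Stubbed scanners: anonymize an email on the way in, restore it on the way out.
def scan_input(text):
    return ScanResult("anonymize", content=text.replace("john@example.com", "<EMAIL_1>"))

def scan_output(text):
    return ScanResult("restore", content=text.replace("<EMAIL_1>", "john@example.com"))

print(run_turn("My email is john@example.com",
               scan_input,
               lambda p: f"Echo: {p}",
               scan_output))
# The LLM stub only ever sees the placeholder; the caller gets the real email back.
```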

Install

pip install fangcun-hook-sdk

# optional: bring the framework you actually use
pip install "fangcun-hook-sdk[openai]"   # raw OpenAI client wrapper
pip install "fangcun-hook-sdk[agents]"   # OpenAI Agents SDK guardrails

Minimal example (raw OpenAI client)

from openai import OpenAI
from fangcun_hook_sdk import HookClient
from fangcun_hook_sdk.adapters.openai_raw import wrap_openai

# Your existing LLM client — unchanged.
llm = OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")

# Wrap it once. The agent code below sees no difference.
hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")
llm = wrap_openai(llm, hook)

resp = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "My email is john@example.com"}],
)
print(resp.choices[0].message.content)
# PII is anonymized before reaching the LLM and restored on the way back.
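Conceptually, anonymization swaps each piece of PII for a placeholder before the prompt leaves your process, and the restore step maps the placeholders back afterwards. A minimal sketch of that round trip; the placeholder format and mapping shape are assumptions for illustration, not the SDK's actual wire format:

```python
# Placeholder-based anonymization/restoration round trip. The
# {placeholder: original} mapping shape is assumed for demonstration.
def anonymize(text, pii_values):
    mapping = {}
    for i, value in enumerate(pii_values, start=1):
        placeholder = f"<PII_{i}>"
        text = text.replace(value, placeholder)
        mapping[placeholder] = value
    return text, mapping

def restore(text, mapping):
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

anon, mapping = anonymize("My email is john@example.com", ["john@example.com"])
# anon == "My email is <PII_1>" -- the LLM only ever sees the placeholder.
print(restore("Got it, I will write to <PII_1>.", mapping))
```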

Minimal example (OpenAI Agents SDK)

import asyncio

from agents import Agent, Runner
from fangcun_hook_sdk import AsyncHookClient
from fangcun_hook_sdk.adapters.openai_agents import (
    make_input_guardrail, make_output_guardrail,
)

hook = AsyncHookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

agent = Agent(
    name="support-bot",
    instructions="You are a helpful assistant.",
    input_guardrails=[make_input_guardrail(hook)],
    output_guardrails=[make_output_guardrail(hook)],
)

async def main() -> None:
    result = await Runner.run(agent, "What is the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())

Low-level API

If your framework isn't covered, call the client directly:

from fangcun_hook_sdk import HookClient, Action

hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

messages = [{"role": "user", "content": "..."}]

scan_in = hook.scan_input(messages=messages)
if scan_in.action == Action.BLOCK:
    raise RuntimeError(scan_in.message)
elif scan_in.action == Action.ANONYMIZE:
    messages_to_send = scan_in.anonymized_messages
    restore_mapping  = scan_in.restore_mapping
else:
    messages_to_send = messages
    restore_mapping  = None

# ... call your LLM with messages_to_send ...

scan_out = hook.scan_output(
    content=llm_reply,
    restore_mapping=restore_mapping,
)
if scan_out.action == Action.BLOCK:
    raise RuntimeError(scan_out.message)
final_text = scan_out.content if scan_out.action == Action.RESTORE else llm_reply

Failure mode

By default the SDK is fail-open: if the hook server is unreachable or returns an error, calls return Action.PASS and the agent keeps working. Pass fail_open=False to surface those errors as exceptions instead.

hook = HookClient(..., fail_open=False)
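Fail-open behavior amounts to catching transport errors and substituting a pass-through result. A toy sketch of the two modes (the guarded_scan helper and the "PASS" sentinel are hypothetical, not SDK API):

```python
# Sketch of fail-open vs fail-closed dispatch. "PASS" stands in for a
# pass-through action; the raising scanner simulates an unreachable
# hook server.
def guarded_scan(scan_fn, *, fail_open=True):
    try:
        return scan_fn()
    except ConnectionError:
        if fail_open:
            return "PASS"       # agent keeps working
        raise                   # surface the error to the caller

def unreachable():
    raise ConnectionError("hook server down")

print(guarded_scan(unreachable, fail_open=True))   # fail-open: returns "PASS"
```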

Talking to a legacy server

If you're pointing at an older FangcunGuard server that only exposes /v1/gateway/process-input and /v1/gateway/process-output, set:

hook = HookClient(..., primary_path="/v1/gateway")

The new server registers both prefixes, so most users don't need this.

Download files


Source Distribution

fangcun_hook_sdk-0.1.0.tar.gz (47.1 kB)


Built Distribution


fangcun_hook_sdk-0.1.0-py3-none-any.whl (64.1 kB)


File details

Details for the file fangcun_hook_sdk-0.1.0.tar.gz.

File metadata

  • Download URL: fangcun_hook_sdk-0.1.0.tar.gz
  • Upload date:
  • Size: 47.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for fangcun_hook_sdk-0.1.0.tar.gz

  • SHA256: e291dd1e0bff5c511edfa454202ce20ddee6aae0d508a153d9b724eb85db98eb
  • MD5: 9da2b3752caeaef0b4419f4b856913a7
  • BLAKE2b-256: b93fbac88b73f53c297392e03c1d9ae91aaa174903e4797119a95b38abaa2688


File details

Details for the file fangcun_hook_sdk-0.1.0-py3-none-any.whl.

File hashes

Hashes for fangcun_hook_sdk-0.1.0-py3-none-any.whl

  • SHA256: 151b94c5698764cd423a65dc6bed098fb32d9756ffef39587e1e04e941b2f8c8
  • MD5: 22908e4e3f44b212c892d19b7dc98f3f
  • BLAKE2b-256: b27287eca60488e3443bbeb87128063ffc5bdf975185a0440b3855e5606265aa

