Project description

fangcun-hook-sdk

Inline-hook client SDK for FangcunGuard. Lets your agent call a FangcunGuard Hook server for input/output safety scanning without changing the agent's LLM URL or API key.

agent (your code, your LLM key, your LLM URL)
   │
   ├─ pre-LLM   ──► hook.scan_input(messages)   ──► allow / block / replace / anonymize
   ├─ call your LLM as usual
   ├─ post-LLM  ──► hook.scan_output(content)   ──► allow / block / replace / restore
   ▼
agent receives the final message
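The flow above can be sketched in plain Python. This is an illustrative stub — the hook and LLM objects here are hypothetical stand-ins, not the real SDK API (the examples below show that):

```python
# Illustrative sketch of the inline-hook flow (hypothetical stub objects,
# not the real SDK): scan input, call the LLM, scan output.

def run_turn(hook, llm, messages):
    scan_in = hook.scan_input(messages)        # pre-LLM safety scan
    if scan_in.action == "block":
        return scan_in.message                 # refuse before spending tokens
    to_send = scan_in.messages or messages     # possibly anonymized copy

    reply = llm(to_send)                       # your LLM, unchanged

    scan_out = hook.scan_output(reply)         # post-LLM safety scan
    if scan_out.action == "block":
        return scan_out.message
    return scan_out.content or reply           # possibly restored text
```

The point is that the hook wraps around your existing LLM call; it never sits between you and the LLM's URL or key.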

Install

pip install fangcun-hook-sdk

# optional: bring the framework you actually use
pip install "fangcun-hook-sdk[openai]"   # raw OpenAI client wrapper
pip install "fangcun-hook-sdk[agents]"   # OpenAI Agents SDK guardrails

Minimal example (raw OpenAI client)

from openai import OpenAI
from fangcun_hook_sdk import HookClient
from fangcun_hook_sdk.adapters.openai_raw import wrap_openai

# Your existing LLM client — unchanged.
llm = OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")

# Wrap it once. The agent code below sees no difference.
hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")
llm = wrap_openai(llm, hook)

resp = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "My email is john@example.com"}],
)
print(resp.choices[0].message.content)
# PII is anonymized before reaching the LLM and restored on the way back.

Minimal example (OpenAI Agents SDK)

from agents import Agent, Runner
from fangcun_hook_sdk import AsyncHookClient
from fangcun_hook_sdk.adapters.openai_agents import (
    make_input_guardrail, make_output_guardrail,
)

hook = AsyncHookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

agent = Agent(
    name="support-bot",
    instructions="You are a helpful assistant.",
    input_guardrails=[make_input_guardrail(hook)],
    output_guardrails=[make_output_guardrail(hook)],
)

import asyncio

async def main():
    result = await Runner.run(agent, "What is the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())

Low-level API

If your framework isn't covered, call the client directly:

from fangcun_hook_sdk import HookClient, Action

hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

original_messages = [{"role": "user", "content": "..."}]

scan_in = hook.scan_input(messages=original_messages)
if scan_in.action == Action.BLOCK:
    raise RuntimeError(scan_in.message)
elif scan_in.action == Action.ANONYMIZE:
    messages_to_send = scan_in.anonymized_messages
    restore_mapping  = scan_in.restore_mapping
else:
    messages_to_send = original_messages
    restore_mapping  = None

# ... call your LLM with messages_to_send and keep its reply text as llm_reply ...

scan_out = hook.scan_output(
    content=llm_reply,
    restore_mapping=restore_mapping,
)
if scan_out.action == Action.BLOCK:
    raise RuntimeError(scan_out.message)
final_text = scan_out.content if scan_out.action == Action.RESTORE else llm_reply
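As a toy illustration of what the anonymize/restore round trip does conceptually — the real PII detection happens on the hook server; this only shows the placeholder-mapping mechanics:

```python
import re

# Toy anonymize/restore: swap emails for placeholders on the way in,
# map the placeholders back in the reply on the way out.
# (Illustrative only — FangcunGuard's server does the real detection.)

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def anonymize(text):
    mapping = {}
    def sub(match):
        key = f"<PII_{len(mapping)}>"
        mapping[key] = match.group(0)
        return key
    return EMAIL.sub(sub, text), mapping

def restore(text, mapping):
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text
```

The `restore_mapping` returned by `scan_input` plays the role of `mapping` here: you carry it across the LLM call and hand it to `scan_output` so placeholders in the reply can be mapped back.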

Failure mode

By default the SDK is fail-open: if the hook server is unreachable or returns an error, calls return Action.PASS and the agent keeps working. Pass fail_open=False to surface those errors as exceptions instead.

hook = HookClient(..., fail_open=False)
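The fail-open behavior can be pictured as a try/except around each scan call. This is a sketch of the semantics under assumed names (`PassResult` and `scan_with_fail_open` are illustrative, not the SDK's internals):

```python
# Sketch of fail-open semantics (hypothetical names, not the SDK's
# internals): if the hook server errors out, act as if it said PASS.

class PassResult:
    action = "pass"

def scan_with_fail_open(scan_fn, *args, fail_open=True, **kwargs):
    try:
        return scan_fn(*args, **kwargs)
    except Exception:
        if fail_open:
            return PassResult()   # agent keeps working, unscanned
        raise                     # fail-closed: surface the error
```

Fail-open trades safety coverage for availability; fail-closed does the opposite. Pick per deployment.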

Talking to a legacy server

If you're pointing at an older FangcunGuard server that only exposes /v1/gateway/process-input and /v1/gateway/process-output, set:

hook = HookClient(..., primary_path="/v1/gateway")

The new server registers both prefixes, so most users don't need this.

Download files

Download the file for your platform.

Source Distribution

fangcun_hook_sdk-0.1.2.tar.gz (58.4 kB)

Built Distribution


fangcun_hook_sdk-0.1.2-py3-none-any.whl (80.8 kB)

File details

Details for the file fangcun_hook_sdk-0.1.2.tar.gz.

File metadata

  • Download URL: fangcun_hook_sdk-0.1.2.tar.gz
  • Upload date:
  • Size: 58.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for fangcun_hook_sdk-0.1.2.tar.gz

  • SHA256: 7c86211512a96d7aad4aba247d382aa31a009e1e43ad29703d5f7627733cebee
  • MD5: 0ed90692d010e1d92a0cc08f78082eca
  • BLAKE2b-256: 749414d8616d666938a5e54fbc5afb55b5d26d367011ef28067ac33a9ab505b2


File details

Details for the file fangcun_hook_sdk-0.1.2-py3-none-any.whl.

File hashes

Hashes for fangcun_hook_sdk-0.1.2-py3-none-any.whl

  • SHA256: 779f7d51a832868a20b0217e7ec572d4507537ce7bc5ac47282e31ab3a6dbf84
  • MD5: 876432ef450137ad0de0a80e4ddd07c2
  • BLAKE2b-256: 41feeadf2c2e8483ec01c0f7622050ebc4d41978a9810aa70b7c69fd7aa8bb2b

