Project description

fangcun-hook-sdk

Inline-hook client SDK for FangcunGuard. Lets your agent call a FangcunGuard Hook server for input/output safety scanning without changing the agent's LLM URL or API key.

agent (your code, your LLM key, your LLM URL)
   │
   ├─ pre-LLM   ──► hook.scan_input(messages)   ──► allow / block / replace / anonymize
   ├─ call your LLM as usual
   ├─ post-LLM  ──► hook.scan_output(content)   ──► allow / block / replace / restore
   ▼
agent receives the final message

Install

pip install fangcun-hook-sdk

# optional: bring the framework you actually use
pip install "fangcun-hook-sdk[openai]"   # raw OpenAI client wrapper
pip install "fangcun-hook-sdk[agents]"   # OpenAI Agents SDK guardrails

Minimal example (raw OpenAI client)

from openai import OpenAI
from fangcun_hook_sdk import HookClient
from fangcun_hook_sdk.adapters.openai_raw import wrap_openai

# Your existing LLM client — unchanged.
llm = OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")

# Wrap it once. The agent code below sees no difference.
hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")
llm = wrap_openai(llm, hook)

resp = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "My email is john@example.com"}],
)
print(resp.choices[0].message.content)
# PII is anonymized before reaching the LLM and restored on the way back.
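Conceptually, the wrapper runs the input scan before your LLM call and the output scan after it. The sketch below shows only the shape of that flow using stand-in functions — the stub logic (one hard-coded email, a placeholder mapping) is invented for the example and is not the real SDK:

```python
# Illustrative sketch of the pre-LLM / post-LLM hook flow.
# scan_input / call_llm / scan_output are stand-ins, not the real SDK.

def scan_input(messages):
    # Stand-in for hook.scan_input: anonymize a known email address.
    mapping = {"<EMAIL_1>": "john@example.com"}
    out = [
        {**m, "content": m["content"].replace("john@example.com", "<EMAIL_1>")}
        for m in messages
    ]
    return out, mapping

def call_llm(messages):
    # Stand-in for the real LLM call; echoes the (anonymized) input.
    return "You said: " + messages[-1]["content"]

def scan_output(content, mapping):
    # Stand-in for hook.scan_output: restore the anonymized values.
    for placeholder, original in mapping.items():
        content = content.replace(placeholder, original)
    return content

messages = [{"role": "user", "content": "My email is john@example.com"}]
safe_messages, mapping = scan_input(messages)   # pre-LLM hook
reply = call_llm(safe_messages)                 # the LLM never sees the raw email
final = scan_output(reply, mapping)             # post-LLM hook
print(final)  # → You said: My email is john@example.com
```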

Minimal example (OpenAI Agents SDK)

import asyncio

from agents import Agent, Runner
from fangcun_hook_sdk import AsyncHookClient
from fangcun_hook_sdk.adapters.openai_agents import (
    make_input_guardrail, make_output_guardrail,
)

hook = AsyncHookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

agent = Agent(
    name="support-bot",
    instructions="You are a helpful assistant.",
    input_guardrails=[make_input_guardrail(hook)],
    output_guardrails=[make_output_guardrail(hook)],
)

async def main():
    result = await Runner.run(agent, "What is the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())

Low-level API

If your framework isn't covered, call the client directly:

from fangcun_hook_sdk import HookClient, Action

hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

messages = [{"role": "user", "content": "..."}]

scan_in = hook.scan_input(messages=messages)
if scan_in.action == Action.BLOCK:
    raise RuntimeError(scan_in.message)
elif scan_in.action == Action.ANONYMIZE:
    messages_to_send = scan_in.anonymized_messages
    restore_mapping  = scan_in.restore_mapping
else:
    messages_to_send = messages
    restore_mapping  = None

# ... call your LLM with messages_to_send ...

scan_out = hook.scan_output(
    content=llm_reply,
    restore_mapping=restore_mapping,
)
if scan_out.action == Action.BLOCK:
    raise RuntimeError(scan_out.message)
final_text = scan_out.content if scan_out.action == Action.RESTORE else llm_reply
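If you use this pattern in more than one place, the two scans compose naturally into a small helper. Everything below is illustrative: `guarded_call` is an invented name, and the lowercase string actions stand in for the SDK's `Action` enum. `hook` can be anything exposing `scan_input`/`scan_output`, and `call_llm` is your own function:

```python
# Hypothetical helper wrapping any LLM call in the scan_input / scan_output
# sequence above. String actions stand in for the SDK's Action enum.

def guarded_call(hook, call_llm, messages):
    scan_in = hook.scan_input(messages=messages)
    if scan_in.action == "block":
        raise RuntimeError(scan_in.message)
    if scan_in.action == "anonymize":
        to_send, mapping = scan_in.anonymized_messages, scan_in.restore_mapping
    else:
        to_send, mapping = messages, None

    reply = call_llm(to_send)              # your LLM, called as usual

    scan_out = hook.scan_output(content=reply, restore_mapping=mapping)
    if scan_out.action == "block":
        raise RuntimeError(scan_out.message)
    return scan_out.content if scan_out.action == "restore" else reply
```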

Failure mode

By default the SDK is fail-open: if the hook server is unreachable or returns an error, calls return Action.PASS and the agent keeps working. Pass fail_open=False to surface those errors as exceptions instead.

hook = HookClient(..., fail_open=False)
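The behaviour amounts to a try/except around each scan call. The sketch below uses invented names (`safe_scan`, `PassResult`, `broken_scan`) to show the difference between the two modes — the real SDK does this internally based on the `fail_open` flag:

```python
# Illustrative sketch of fail-open vs fail-closed. All names are invented.

class PassResult:
    action = "pass"

def safe_scan(scan_fn, *args, fail_open=True, **kwargs):
    try:
        return scan_fn(*args, **kwargs)
    except Exception:
        if fail_open:
            return PassResult()   # hook down: let traffic through
        raise                     # fail-closed: surface the error

def broken_scan(messages):
    raise ConnectionError("hook server unreachable")

print(safe_scan(broken_scan, [], fail_open=True).action)   # → pass
```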

Talking to a legacy server

If you're pointing at an older FangcunGuard server that only exposes /v1/gateway/process-input and /v1/gateway/process-output, set:

hook = HookClient(..., primary_path="/v1/gateway")

The new server registers both prefixes, so most users don't need this.

Download files

Source Distribution

fangcun_hook_sdk-0.1.1.tar.gz (47.5 kB)

Built Distribution

fangcun_hook_sdk-0.1.1-py3-none-any.whl (64.5 kB)

File details

Details for the file fangcun_hook_sdk-0.1.1.tar.gz.

File metadata

  • Download URL: fangcun_hook_sdk-0.1.1.tar.gz
  • Size: 47.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for fangcun_hook_sdk-0.1.1.tar.gz
Algorithm Hash digest
SHA256 3cee4db9b7a16c8163acf3265acdb0930269356c15369d78eab70cb962265e4d
MD5 f910352e0db1e29610a76554a7abc2de
BLAKE2b-256 b149628eac731be98e90834c37d583addf970db1b1661158dcb2c044af6fdca3


File details

Details for the file fangcun_hook_sdk-0.1.1-py3-none-any.whl.

File hashes

Hashes for fangcun_hook_sdk-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 5350b243c98978ea337faee97940e07fd730dfa6fe14940463019d4bf8cc9db9
MD5 8ad9524315f1fd2d0d1bbfba84ac18eb
BLAKE2b-256 cdd3ac547dd09faded65306816334136302b38817bb7019c5e4a6dda7b711f18

