
Policy adherence code generation for guarding AI agent tools

Project description

📦 AI Agents Policy Adherence

This tool analyzes policy documents and generates deterministic Python code that enforces operational policies when AI agent tools are invoked. This work is described in the EMNLP 2025 paper Towards Enforcing Company Policy Adherence in Agentic Workflows.

Business policies (or guidelines) are normally detailed in company documents and have traditionally been hard-coded into automated assistant platforms. Contemporary agentic approaches take a "best-effort" strategy in which the policies are appended to the agent's system prompt, an inherently non-deterministic approach that does not scale effectively. Here we propose a deterministic, predictable, and interpretable two-phase solution for agentic policy adherence at the tool level: guards are executed prior to function invocation and raise alerts when a tool-related policy is deemed violated.

This component enforces pre-tool-activation policy constraints, ensuring that agent decisions comply with business rules before system state is modified. This prevents policy violations such as unauthorized tool calls or unsafe parameter values.
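The pre-invocation check can be pictured as a thin wrapper around each tool. The sketch below is illustrative only: `guarded`, `PolicyViolation`, and the refund rule are invented for this example and are not part of the toolguard API.

```python
# Illustrative sketch only: `guarded`, `PolicyViolation`, and the refund rule
# are invented for this example and are not the library's actual API.

class PolicyViolation(Exception):
    """Raised when a tool invocation would violate a policy."""

def guarded(guard):
    """Wrap a tool so that `guard` runs before every invocation."""
    def decorator(tool):
        def wrapper(*args, **kwargs):
            guard(*args, **kwargs)        # raises PolicyViolation on a breach
            return tool(*args, **kwargs)  # reached only if the guard passes
        return wrapper
    return decorator

def refund_guard(order_id, amount):
    # Hypothetical policy: large refunds need manager approval.
    if amount > 100:
        raise PolicyViolation("Refunds over 100 require manager approval")

@guarded(refund_guard)
def issue_refund(order_id, amount):
    return f"refunded {amount} for {order_id}"
```

The guard raises before the tool body ever runs, which is what makes the check deterministic rather than prompt-dependent.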

Step 1:

This component takes a set of tools and a policy document and generates multiple ToolGuard specifications, known as ToolGuardSpecs. Each specification is attached to a tool and declares a precondition that must hold before the tool is invoked. A specification has a name, a description, a list of references to the original policy document, a set of declarative compliance_examples describing test cases in which the ToolGuard should allow the tool invocation, and violation_examples, in which the ToolGuard should raise an exception.
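As an illustration, a ToolGuardSpec built from those fields might look like the following Python dict. Only the field names come from the description above; the example values and the concrete schema and file format produced by the generator are assumptions.

```python
# Illustrative ToolGuardSpec using only the fields named in the text.
# The concrete schema and file format produced by the generator may differ.
spec = {
    "name": "refund_requires_paid_order",
    "description": "A refund may only be issued for an order that was paid.",
    "references": ["wiki.md#refund-policy"],  # pointers into the policy document
    "compliance_examples": [
        # the ToolGuard should ALLOW the invocation in these cases
        {"order": {"status": "paid"}, "amount": 20},
    ],
    "violation_examples": [
        # the ToolGuard should RAISE in these cases
        {"order": {"status": "pending"}, "amount": 20},
    ],
}
```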

The specifications are intended to serve as input to our next component, described below.

The two components are not chained by design. Because the generation involves a non-deterministic language model, the results need to be reviewed by a human. Hence, the output specification files should be reviewed and optionally edited, for example to remove an incorrect compliance example.

The OpenAPI document should describe agent tools and optionally include read-only tools that might be used to enforce policies. It’s important that each tool has:

  • A proper operation_id matching the tool name
  • A detailed description
  • Clearly defined input parameters and return types
  • Well-documented data models
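A minimal, hypothetical fragment satisfying this checklist, shown as a Python dict for brevity (on disk the OpenAPI document would be JSON or YAML; the endpoint and parameter names are invented):

```python
# Hypothetical OpenAPI fragment satisfying the checklist above, shown as a
# Python dict for brevity; on disk the document would be JSON or YAML.
openapi_fragment = {
    "paths": {
        "/users": {
            "post": {
                "operationId": "create_user",  # must match the tool name
                "description": "Create a new user account for the given email.",
                "parameters": [
                    {
                        "name": "email",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},  # clearly typed input
                    }
                ],
                "responses": {
                    "200": {"description": "The created user record."}
                },
            }
        }
    }
}
```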

Step 2:
Uses the output from Step 1 and the OpenAPI spec to generate Python code that enforces each tool’s policies.
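To make the idea concrete, a generated guard might have roughly the following shape. This is an illustrative sketch for a hypothetical refund tool, showing the kind of deterministic argument checks involved, not actual output of the generator.

```python
# Illustrative shape of a generated guard for a hypothetical refund tool:
# deterministic argument checks that raise before the tool runs. This is a
# sketch of the idea, not actual output of the generator.

class ToolGuardViolation(Exception):
    pass

def guard_issue_refund(order: dict, amount: float) -> None:
    """Precondition for a hypothetical issue_refund tool."""
    if order.get("status") != "paid":
        raise ToolGuardViolation("Refunds are only allowed for paid orders.")
    if amount > order.get("total", 0):
        raise ToolGuardViolation("Refund amount may not exceed the order total.")
```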


🐍 Requirements

  • Python 3.12+

🛠 Installation

  1. Install the package:

    uv pip install toolguard
    
  2. Create a .env file:

    Copy .env.example to src/.env and fill in your environment variables. Set AZURE_OPENAI_API_KEY to your actual API key, and add your API key to TOOLGUARD_GENPY_ARGS.

▶️ Usage

PYTHONPATH=src python -m policy_adherence --policy-path <path_to_policy> --oas <path_to_oas> --out-dir <output_directory> [options]

Arguments

Argument Type Description
--policy-path str Path to the policy file, currently expected in Markdown syntax. Example: /Users/me/airline/wiki.md
--oas str Path to an OpenAPI specification file (JSON/YAML) describing the available tools. The operation_ids should match the tool names. Example: /Users/me/airline/openapi.json
--out-dir str Path to an output folder where the generated artifacts will be written. Example: /Users/me/airline/outdir2
--force-step1 flag Force execution of step 1 even if its artifacts already exist. Default: False
--run-step2 flag Whether to execute step 2. Default: True
--step1-dir-name str Folder name under the output folder for step 1 artifacts. Default: Step1
--step2-dir-name str Folder name under the output folder for step 2 artifacts. Default: Step2
--tools list Optional list of tool names to include. These should be a subset of the OpenAPI operation_ids. Example: --tools create_user delete_user

Example

PYTHONPATH=src python -m policy_adherence \
  --policy-path ./policy/wiki.md \
  --oas ./spec/openapi.json \
  --out-dir ./output \
  --force-step1 \
  --tools create_user delete_user
