
Runtime classifier for screening AI agent actions as safe, harmful, or unethical.

Project description

Agent Action Guard

A lightweight, real-time, easy-to-use framework that blocks harmful AI agent actions before they execute.



🚀 Quick Start

pip install agent-action-guard

🔑 Set EMBEDDING_API_KEY (or OPENAI_API_KEY) in your environment. See .env.example and USAGE.md.
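A minimal sketch of resolving the key in Python. The variable names EMBEDDING_API_KEY and OPENAI_API_KEY come from the note above; the helper `resolve_embedding_key` and its fallback logic are illustrative assumptions, not part of the package's API (see USAGE.md for the real setup):

```python
import os

def resolve_embedding_key() -> str:
    """Return the embedding API key, falling back to OPENAI_API_KEY.

    Illustrative helper; the actual package reads its configuration
    as described in USAGE.md.
    """
    key = os.environ.get("EMBEDDING_API_KEY") or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set EMBEDDING_API_KEY (or OPENAI_API_KEY) in your environment; "
            "see .env.example and USAGE.md."
        )
    return key
```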

Want to run the evaluation benchmark too?

pip install "agent-action-guard[harmactionseval]"
python -m agent_action_guard.harmactionseval

❓ Why Action Guard?

The HarmActionsEval benchmark showed that AI agents given harmful tools will use them, even today's most capable LLMs: 80% of the tested models executed the harmful action on the first attempt for over 95% of the harmful prompts.

Model                      SafeActions@1
Claude Haiku 4.5           0.00%
Phi 4 Mini Instruct        0.00%
Granite 4-H-Tiny           0.00%
GPT-5.4 Mini               0.71%
Gemini 3.1 Flash Lite      0.71%
Grok 4.20 Non Reasoning    2.13%
Ministral 3 (3B)           2.13%
Claude Sonnet 4.6          2.84%
Phi 4 Mini Reasoning       2.84%
GPT-5.3                    12.77%
Qwen3.5-397b-a17b          23.40%
Average                    4.54%

These models often still respond "Sorry, I can't help with that" while executing the harmful action anyway.

Action Guard sits between the agent and its tools, blocking unsafe calls before they run — no human in the loop required.



⚙️ How It Works

  1. Agent proposes a tool call
  2. Action Guard classifies it using a lightweight neural network trained on the HarmActions dataset
  3. Harmful calls are blocked; safe calls proceed normally
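The three-step loop above can be sketched in Python. This is a self-contained illustration of the interception pattern, not the package's actual API; `classify_action` here is a toy stand-in for the HarmActions-trained neural classifier:

```python
from typing import Any, Callable

def classify_action(tool_name: str, arguments: dict) -> str:
    """Toy stand-in for the trained classifier.
    Returns one of 'safe', 'harmful', or 'unethical'."""
    # Hard-coded heuristic for illustration only; the real guard
    # uses a neural network trained on the HarmActions dataset.
    blocklist = {"delete_all_files", "send_phishing_email"}
    return "harmful" if tool_name in blocklist else "safe"

def guarded_call(tool: Callable[..., Any], tool_name: str, arguments: dict) -> Any:
    """Classify a proposed tool call; run it only if it is safe."""
    verdict = classify_action(tool_name, arguments)
    if verdict != "safe":
        raise PermissionError(f"Blocked {tool_name}: classified as {verdict}")
    return tool(**arguments)

# Example tool the agent might propose
def search_web(query: str) -> str:
    return f"results for {query!r}"

guarded_call(search_web, "search_web", {"query": "weather"})  # proceeds normally
```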

🆕 Contributions:

  • 📊 HarmActions — safety-labeled agent action dataset with manipulated prompts
  • 📏 HarmActionsEval — benchmark with the SafeActions@k metric
  • 🧠 Action Guard — real-time neural classifier optimized for agent loops
    • 🏋️ Trained on HarmActions
    • ✅ Classifies every tool call before execution
    • 🚫 Blocks harmful and unethical actions automatically
    • ⚡ Lightweight for real-time use
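As a rough illustration of how a SafeActions@k-style score could be computed. The exact definition lives in the HarmActionsEval benchmark; the reading used here, the fraction of harmful prompts whose first k sampled actions were all safe, is an assumption (for k = 1 it reduces to "safe on the first attempt", matching the table above):

```python
def safe_actions_at_k(attempts: list[list[bool]], k: int) -> float:
    """attempts[i][j] is True if attempt j on harmful prompt i was safe.
    Returns the fraction of prompts whose first k attempts were all safe.
    Assumed reading of the metric, not the benchmark's official code."""
    safe = sum(1 for prompt in attempts if all(prompt[:k]))
    return safe / len(attempts)

# Three harmful prompts, two sampled attempts each
runs = [[True, True], [False, True], [True, False]]
safe_actions_at_k(runs, 1)  # 2/3 of prompts were safe on the first attempt
```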

💬 Enjoyed it? Share your opinion.

Leave a quick note in Discussions: feedback directly shapes the project's direction and helps the AI safety community. 🙌

⭐ Star the repo if Action Guard is useful to you — it really does help!


📝 Citation

@article{202510.1415,
  title   = {{Agent Action Guard: Classifying AI Agent Actions to Ensure Safety and Reliability}},
  year    = 2025,
  month   = {October},
  publisher = {Preprints},
  author  = {Praneeth Vadlapati},
  doi     = {10.20944/preprints202510.1415.v2},
  url     = {https://www.preprints.org/manuscript/202510.1415},
  journal = {Preprints}
}

📄 License

Licensed under CC BY 4.0. If you prefer not to provide attribution, send a brief acknowledgment to praneeth.vad@gmail.com with the details of your usage and the potential impact on your project.






Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

agent_action_guard-1.1.2.tar.gz (39.4 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

agent_action_guard-1.1.2-py3-none-any.whl (36.6 kB view details)

Uploaded Python 3

File details

Details for the file agent_action_guard-1.1.2.tar.gz.

File metadata

  • Download URL: agent_action_guard-1.1.2.tar.gz
  • Upload date:
  • Size: 39.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for agent_action_guard-1.1.2.tar.gz
Algorithm Hash digest
SHA256 b876eda53f57c2df0f76c5af391596da488fe1c59e903c57db16b1a8a367b1e5
MD5 fa2c1783f7c31efcace87621468c3277
BLAKE2b-256 65d5fe137beb1507bd9f5d765c1a93d77a571e2dda44f642ff864c829307e6f8

See more details on using hashes here.

Provenance

The following attestation bundles were made for agent_action_guard-1.1.2.tar.gz:

Publisher: publish-pypi.yml on Pro-GenAI/Agent-Action-Guard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file agent_action_guard-1.1.2-py3-none-any.whl.

File metadata

File hashes

Hashes for agent_action_guard-1.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 d2f353330078d5c9c33fdbca8a0ddfe8e2a0a99ad40662c20cd61c9da2e15d09
MD5 aaf33213e9fbdddc22ebcba97ea37a01
BLAKE2b-256 4f68d5b1e5f4b7430023e0122886b5ae99c2e22599b27edda46a59de82aa20a3

See more details on using hashes here.

Provenance

The following attestation bundles were made for agent_action_guard-1.1.2-py3-none-any.whl:

Publisher: publish-pypi.yml on Pro-GenAI/Agent-Action-Guard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
