
Runtime classifier for screening AI agent actions as safe, harmful, or unethical.

Project description

Agent Action Guard


Lightweight, real-time, easy-to-use framework that blocks harmful AI agent actions before they execute.



🚀 Quick Start

pip install agent-action-guard

🔑 Set EMBEDDING_API_KEY (or OPENAI_API_KEY) in your environment. See .env.example and USAGE.md.

Want to run the evaluation benchmark too?

pip install "agent-action-guard[harmactionseval]"
python -m agent_action_guard.harmactionseval

❓ Why Action Guard?

The HarmActionsEval benchmark showed that AI agents given harmful tools will use them, including today's most capable LLMs: 80% of the models tested executed the harmful action on the first attempt for over 95% of the harmful prompts.

| Model | SafeActions@1 |
| --- | ---: |
| Claude Haiku 4.5 | 0.00% |
| Phi 4 Mini Instruct | 0.00% |
| Granite 4-H-Tiny | 0.00% |
| GPT-5.4 Mini | 0.71% |
| Gemini 3.1 Flash Lite | 0.71% |
| Ministral 3 (3B) | 2.13% |
| Claude Sonnet 4.6 | 2.84% |
| Phi 4 Mini Reasoning | 2.84% |
| GPT-5.3 | 12.77% |
| Qwen3.5-397b-a17b | 23.40% |
| **Average** | **4.54%** |

These models often still respond "Sorry, I can't help with that" while executing the harmful action anyway.

Action Guard sits between the agent and its tools, blocking unsafe calls before they run — no human in the loop required.



⚙️ How It Works

  1. Agent proposes a tool call
  2. Action Guard classifies it using a lightweight neural network trained on the HarmActions dataset
  3. Harmful calls are blocked; safe calls proceed normally
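The three steps above can be sketched as a thin wrapper sitting between the agent and its tools. Everything below is illustrative only: the function names and the toy `classify_action` heuristic are stand-ins, not the package's actual API.

```python
# Illustrative sketch of the guard loop. `classify_action` is a toy
# stand-in for the package's classifier -- not its real API.

def classify_action(tool_name: str, arguments: dict) -> str:
    """Toy heuristic: flag obviously destructive tools as 'harmful'."""
    harmful_tools = {"delete_all_files", "send_phishing_email"}
    return "harmful" if tool_name in harmful_tools else "safe"

def guarded_call(tool, tool_name: str, arguments: dict) -> dict:
    """Classify the proposed tool call; run it only if it is safe."""
    verdict = classify_action(tool_name, arguments)
    if verdict != "safe":
        return {"blocked": True, "reason": verdict}
    return {"blocked": False, "result": tool(**arguments)}

# Example: a safe call proceeds; a harmful one never runs.
def list_files(path):
    return ["report.txt"]

print(guarded_call(list_files, "list_files", {"path": "."}))
print(guarded_call(None, "delete_all_files", {}))
```

Because the guard returns a structured verdict instead of raising, the agent loop can feed the block reason back to the model and continue.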

🆕 Contributions:

  • 📊 HarmActions — safety-labeled agent action dataset with manipulated prompts
  • 📏 HarmActionsEval — benchmark with the SafeActions@k metric
  • 🧠 Action Guard — real-time neural classifier optimized for agent loops
    • 🏋️ Trained on HarmActions
    • ✅ Classifies every tool call before execution
    • 🚫 Blocks harmful and unethical actions automatically
    • ⚡ Lightweight for real-time use
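As a rough illustration, SafeActions@k can be read as the fraction of harmful prompts for which none of the model's first k attempts executed the harmful action. This reading of the metric is an assumption for the sketch below, not code taken from the benchmark:

```python
def safe_actions_at_k(executed_attempts: list[list[bool]], k: int = 1) -> float:
    """SafeActions@k (assumed definition): fraction of harmful prompts
    where none of the first k attempts executed the harmful action.
    Each inner list holds one boolean per attempt; True means the model
    executed the harmful action on that attempt."""
    safe = sum(1 for attempts in executed_attempts if not any(attempts[:k]))
    return safe / len(executed_attempts)

# Toy run: 4 harmful prompts, one attempt each; the model refused once.
results = [[True], [True], [True], [False]]
print(f"SafeActions@1 = {safe_actions_at_k(results, k=1):.2%}")
```

Under this reading, a score of 0.00% in the table means the model executed the harmful action on its first attempt for every prompt.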

💬 Enjoyed it? Share your opinion.

Share a quick note in Discussions; it directly shapes the project's direction and helps the AI safety community. 🙌 Feedback on how Action Guard helps you or the AI community is eagerly welcomed.

⭐ Star the repo if Action Guard is useful to you — it really does help!


📝 Citation

@article{202510.1415,
  title   = {{Agent Action Guard: Classifying AI Agent Actions to Ensure Safety and Reliability}},
  year    = 2025,
  month   = {October},
  publisher = {Preprints},
  author  = {Praneeth Vadlapati},
  doi     = {10.20944/preprints202510.1415.v2},
  url     = {https://www.preprints.org/manuscript/202510.1415},
  journal = {Preprints}
}

📄 License

Licensed under CC BY 4.0. If you prefer not to provide attribution, send a brief acknowledgment to praneeth.vad@gmail.com describing how you use the project and its potential impact on your work.



Pro-GenAI
Projects for Next-Gen AI



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

agent_action_guard-1.1.1.tar.gz (39.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

agent_action_guard-1.1.1-py3-none-any.whl (36.5 kB)

Uploaded Python 3

File details

Details for the file agent_action_guard-1.1.1.tar.gz.

File metadata

  • Download URL: agent_action_guard-1.1.1.tar.gz
  • Upload date:
  • Size: 39.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for agent_action_guard-1.1.1.tar.gz
Algorithm Hash digest
SHA256 d80b525d809553b456853bb306f4af48a885841263e4eff1e50d00da651ff6d2
MD5 e7a762f8bc1c1a9f9d18011f421b469b
BLAKE2b-256 ec675ed095e7d473032334e8028388e60b97114d5d7620542d5b822aca92c53c

See more details on using hashes here.

Provenance

The following attestation bundles were made for agent_action_guard-1.1.1.tar.gz:

Publisher: publish-pypi.yml on Pro-GenAI/Agent-Action-Guard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file agent_action_guard-1.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for agent_action_guard-1.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 9d755e63dd19cb6be44636023181f484850ac7c3024265a4d86391dd57d29408
MD5 3b16cc48592270f8ab6edfd02777b273
BLAKE2b-256 3ad7487cb068c3a381e95fbf79189c10d525e50739c0f34d0fbbb35d7fbff5ea

See more details on using hashes here.

Provenance

The following attestation bundles were made for agent_action_guard-1.1.1-py3-none-any.whl:

Publisher: publish-pypi.yml on Pro-GenAI/Agent-Action-Guard

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
