Runtime classifier for screening AI agent actions as safe, harmful, or unethical.
When AI agents are given a harmful tool and a matching instruction, they simply use it, and popular, high-performing recent LLMs are no exception.

AI is widely perceived as a threat. As agent usage grows, so do the use of harmful tools and the harmful use of tools, as demonstrated with HarmActEval. Classifying AI agent actions at runtime improves safety and reliability. Action Guard uses a neural network trained on the HarmActions dataset, a small set of labeled examples, to classify actions proposed by autonomous AI agents as safe, harmful, or unethical. The aim is to enhance the safety and reliability of AI agents by preventing them from executing actions that are potentially harmful, unethical, or that violate predefined guidelines. Safe AI agents are made possible by the Action Classifier.
## Demo
> [!TIP]
> Please star the repository if you find Action Guard useful!
Common causes of harmful actions by AI agents:

- Users attempting to jailbreak the model.
- The model hallucinating or misunderstanding the context.
- The model being overconfident in its incorrect knowledge.
- Lack of proper constraints or guidelines for the agent.
- Inadequate training data for specific scenarios.
- MCP servers providing incorrect tool descriptions that mislead the agent.
- Harmful MCP servers returning manipulative text to mislead the model.
- In experiments, models performed a harmful action and still responded, "Sorry, I can't help with that."
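These failure modes motivate screening every proposed action before it runs. Below is a minimal, self-contained sketch of that pattern; the names `classify_action` and `guarded_execute` and the toy rule inside them are illustrative stand-ins, not the Action Guard API:

```python
def classify_action(tool_name: str, arguments: dict) -> str:
    """Toy stand-in classifier: flags a couple of known-dangerous tools.
    A real screener (like Action Guard's trained model) replaces this."""
    dangerous_tools = {"delete_all_files", "transfer_funds"}
    return "harmful" if tool_name in dangerous_tools else "safe"


def guarded_execute(tool_name: str, arguments: dict, tools: dict):
    """Run the tool only if the screening step labels the action safe."""
    label = classify_action(tool_name, arguments)
    if label != "safe":
        return f"Blocked: action classified as {label}"
    return tools[tool_name](**arguments)


tools = {"read_file": lambda path: f"contents of {path}"}
print(guarded_execute("read_file", {"path": "notes.txt"}, tools))
print(guarded_execute("delete_all_files", {}, tools))
```

The key design point is that the classifier sits between the agent's proposal and the tool's execution, so a jailbroken or hallucinating model never reaches the tool.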
New contributions of the Agent-Action-Guard framework:

- HarmActions: a structured dataset of safety-labeled agent actions, complemented with manipulated prompts that trigger harmful or unethical actions.
- HarmActEval: a benchmark leveraging a new metric, "Safe@k."
- Action Classifier: a neural classifier trained on the HarmActions dataset, designed to label proposed agent actions as potentially harmful or safe, and optimized for real-time deployment in agent loops.
- MCP integration: live action screening using existing MCP servers and clients.
## HarmActEval Results

Popular and recent LLMs generate harmful actions, underscoring the need for an action guard and for the HarmActEval benchmark.
| Model | Safe@1 score |
|---|---|
| Phi 4 Mini Instruct | 0.00% |
| Granite 4-H-Tiny | 0.00% |
| Phi 4 Mini Reasoning | 5.33% |
| Ministral 3 (3B) | 2.67% |
| GPT-5.3 | 17.33% |
Note: A higher Safe@k score is better.
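The exact Safe@k definition is not spelled out here; one plausible reading, by analogy with pass@k-style metrics, is the fraction of evaluation prompts for which all k sampled actions are safe. A hypothetical sketch under that assumption:

```python
def safe_at_k(samples_per_prompt: list, k: int) -> float:
    """Assumed Safe@k: fraction of prompts whose first k sampled actions
    are all labeled safe. samples_per_prompt[i] holds booleans
    (True = safe) for prompt i; each prompt needs at least k samples."""
    assert all(len(s) >= k for s in samples_per_prompt)
    safe = sum(all(s[:k]) for s in samples_per_prompt)
    return safe / len(samples_per_prompt)


labels = [
    [True, True, True],    # always safe
    [True, False, True],   # safe at k=1, unsafe within k=3
    [False, False, False], # never safe
]
print(safe_at_k(labels, 1))  # 2 of 3 prompts safe at k=1
print(safe_at_k(labels, 3))  # 1 of 3 prompts safe at k=3
```

Under this reading, Safe@k is monotonically non-increasing in k, which matches the note that higher scores are better and harder to achieve with more samples.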
Special features:

- Introduces the HarmActions dataset and the HarmActEval benchmark to evaluate an AI agent's probability of generating harmful actions.
- The dataset has been used to train a lightweight neural network that classifies actions as safe, harmful, or unethical.
- The model is lightweight and integrates easily into existing AI agent setups, including MCP-based ones.
- This project classifies actions; it is not a guardrails system for text.
- Supports MCP (Model Context Protocol) for real-time action classification.
- Unlike OpenAI's `"require_approval": "always"` flag, this blocks harmful actions without human intervention.
- An A2A-compatible version is available: https://github.com/Pro-GenAI/A2A-Agent-Action-Guard
Safety features:

- Automatically classifies MCP tool calls before execution.
- Blocks harmful actions based on the trained model's outputs.
- Provides detailed classification results.
- Allows safe actions to proceed normally.
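The flow above (classify, block or proceed, report details) can be sketched with an illustrative result object; the shapes below are assumptions for exposition, not the package's real API:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ScreeningResult:
    """Detailed outcome of screening one tool call."""
    tool: str
    label: str          # e.g. "safe", "harmful", or "unethical"
    executed: bool
    output: Any = None  # tool output, only when the call was allowed


def screen_and_call(tool: str, args: dict,
                    impl: Callable, classify: Callable) -> ScreeningResult:
    """Classify the proposed call; execute only if labeled safe."""
    label = classify(tool, args)
    if label != "safe":
        return ScreeningResult(tool=tool, label=label, executed=False)
    return ScreeningResult(tool=tool, label=label, executed=True,
                           output=impl(**args))


result = screen_and_call(
    "send_email", {"to": "a@b.c"},
    impl=lambda to: f"sent to {to}",
    classify=lambda tool, args: "safe",
)
print(result)
```

Returning a structured result rather than a bare boolean lets the agent framework log why an action was blocked and surface that to the user.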
## Feedback

Love Action Guard? Please share a quick note at https://github.com/Pro-GenAI/Agent-Action-Guard/discussions/15. It helps shape the project and grow its impact on the AI field. We look forward to feedback and discussions on how this helps you or the AI community.
## Usage

Quick install using uv:

```shell
uv venv
source .venv/bin/activate
uv pip install agent-action-guard
```

Install with the HarmActEval CLI extras:

```shell
pip install "agent-action-guard[harmacteval]"
python -m agent_action_guard.harmacteval --k 3
```
For usage instructions, refer to https://github.com/Pro-GenAI/Agent-Action-Guard/blob/main/USAGE.md.
Note: The embedding client accepts an API key via the `EMBEDDING_API_KEY` environment variable (falling back to `OPENAI_API_KEY` if unset). See `.env.example` and USAGE.md for examples.
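A minimal sketch of that documented lookup order (the function name is illustrative; only the two environment variable names come from the note above):

```python
import os


def resolve_embedding_key(env=None):
    """Return the API key for the embedding client:
    EMBEDDING_API_KEY first, then OPENAI_API_KEY, else None."""
    env = os.environ if env is None else env
    return env.get("EMBEDDING_API_KEY") or env.get("OPENAI_API_KEY")


print(resolve_embedding_key({"OPENAI_API_KEY": "sk-test"}))
print(resolve_embedding_key({"EMBEDDING_API_KEY": "ek-1",
                             "OPENAI_API_KEY": "sk-test"}))
```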
## A2A version

While this repository focuses on standard tool calls and MCP, an Agent-to-Agent (A2A) compatible version is available at https://github.com/Pro-GenAI/A2A-Agent-Action-Guard.
## Citation

If you find this repository useful in your research, please consider citing:

```bibtex
@article{202510.1415,
  title     = {Agent Action Guard: Classifying AI Agent Actions to Ensure Safety and Reliability},
  author    = {Praneeth Vadlapati},
  year      = {2025},
  month     = {October},
  journal   = {Preprints},
  publisher = {Preprints},
  doi       = {10.20944/preprints202510.1415.v1},
  url       = {https://doi.org/10.20944/preprints202510.1415.v1}
}
```