
Local-first runtime guardrails for coding agents - stop loops, retry storms, and budget burn with a zero-dependency SDK


AgentGuard

Your coding agent just started looping through retries and shell calls. AgentGuard stops it before it goes off the rails.

Local-first runtime governance for AI agents. Budget, loops, timeouts, rates — four static guards that stop an agent before it runs away, with a zero-dependency Python SDK. Traces and incident context exposed through MCP when your tooling needs read access.

When tokens cost 12 cents per million, the bottleneck isn't cost. It's control. AgentGuard is the governance layer that keeps agents inside the rails you set — no matter how cheap the tokens get.

PyPI Downloads Python CI Coverage License: MIT OpenSSF Scorecard GitHub stars

pip install agentguard47

Why this wedge

AgentGuard stays focused on coding-agent safety on purpose.

In an April 2026 report, a16z said that 29% of the Fortune 500 and about 19% of the Global 2000 were live paying customers of leading AI startups, with coding described as the dominant enterprise AI use case and support/search next behind it. The report also cited repeated claims of 10-20x productivity gains from AI coding tools. Source: AI Adoption by the Numbers.

That supports the public SDK strategy in this repo:

  • stay narrow on coding-agent runtime safety
  • make the first proof local, cheap, and easy to trust
  • reuse the same runtime patterns for adjacent managed-agent workflows later, without turning the SDK into a generic observability platform

Why runtime safety matters now

Agents are getting more autonomous. The guardrails around them are not keeping up.

  • Unchecked token burn is real. Meta's internal "Claudeonomics" leaderboard tracked 60 trillion tokens consumed by 85,000 employees in 30 days. Some employees left agents running for hours just to climb the rankings. Meta shut the dashboard down days after it leaked. (source)
  • Self-improving agents need guardrails that don't self-improve. Cursor's Bugbot has auto-generated 44,000+ learned rules across 110,000+ repos. When agents write their own rules, the safety layer has to be external and deterministic. Not another model. Not another prompt. (source)
  • Layered agent architectures are the default now. Orchestrators spawn sub-agents that spawn tool calls. Every layer multiplies the blast radius of a stuck loop or a retry storm. You need a guard that runs in-process, at every layer, and kills the run before it compounds.

AgentGuard is that layer. Zero dependencies. No network calls required. Raises an exception and stops the agent mid-run.

Token-metered pricing changes the failure mode

Most model APIs already bill on token-linked usage. That means a runaway agent is not the only budget risk anymore. One oversized turn with a huge context window or a verbose completion can erase the run budget on its own. Runtime budget guards are no longer optional.

AgentGuard's BudgetGuard is built for that reality:

  • cap spend for the whole run, not just call count
  • warn before the budget is exhausted
  • raise BudgetExceeded on the spike turn itself

Local proof:

python examples/per_token_budget_spike.py
agentguard report per_token_budget_spike_traces.jsonl

That example prices each turn from token counts, then shows a single token-heavy turn blowing through the run budget without any network calls or provider credentials.
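
The arithmetic behind that proof is easy to reproduce without the SDK. Below is a plain-Python sketch (not the AgentGuard API; the per-token price here is illustrative) showing one oversized turn clearing the remaining run budget:

```python
# Plain-Python sketch of per-token budget enforcement (not the AgentGuard API).
PRICE_PER_TOKEN = 0.000002  # hypothetical $2 per million tokens

class BudgetExceeded(RuntimeError):
    pass

def run_turns(turn_tokens, budget_usd):
    spent = 0.0
    for i, tokens in enumerate(turn_tokens, start=1):
        cost = tokens * PRICE_PER_TOKEN
        if spent + cost > budget_usd:
            raise BudgetExceeded(
                f"turn {i}: cost ${spent + cost:.4f} exceeds budget ${budget_usd:.2f}"
            )
        spent += cost
    return spent

# Nine normal turns, then a single 400k-token spike turn.
turns = [2_000] * 9 + [400_000]
try:
    run_turns(turns, budget_usd=0.50)
except BudgetExceeded as e:
    print(e)  # the spike turn itself trips the guard
```

The point of the sketch is that the spike turn is caught at the moment it would overrun, not after the bill arrives.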

Why static guards

Cost control is table stakes. The harder problem is behavior control.

Recent evidence shows that frontier models scheme, deceive, and resist shutdown when given autonomy:

  • Mythos Preview (April 2026) found exploitable vulnerabilities in every major OS and browser during a controlled evaluation. The findings triggered a government emergency meeting. (source)
  • Nature (2026) published peer-reviewed evidence of LLMs disabling their own oversight mechanisms, scheming toward hidden objectives, and leaving concealed notes to future instances of themselves. (source)
  • War games research put GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash into simulated geopolitical conflicts. Every model showed spontaneous deception. None surrendered. Multiple runs escalated to nuclear strikes despite explicit taboo framing. (source)

ML-based safety layers share the same failure mode as the agents they guard: they can be persuaded, prompt-injected, or socially engineered into disabling themselves. A model that schemes can also scheme past a model-based monitor.

AgentGuard's guards are static, deterministic, rule-based checks. They run in-process. They raise exceptions. They cannot be convinced, negotiated with, or talked out of a budget limit. That is the point.

Cost control tells you when to stop spending. Behavior control tells you when to stop the agent. AgentGuard does both.

Verify your install

Before wiring a real agent, validate the local SDK path:

agentguard doctor

doctor makes no network calls. It verifies local trace writing, confirms the SDK can initialize in local-only mode, detects optional integrations already installed in your environment, and prints the smallest correct next-step snippet.

Generate a starter

When you know the stack you want to wire, print the exact starter snippet:

agentguard quickstart --framework raw
agentguard quickstart --framework openai
agentguard quickstart --framework langgraph --json

quickstart is designed for both humans and coding agents. It prints the install command, the smallest credible starter file, and the next commands to run after you validate the SDK locally.

If you want a real file instead of a printed snippet:

agentguard quickstart --framework raw --write
agentguard quickstart --framework openai --write --output agentguard_openai_quickstart.py

--write creates a local starter file you can run immediately. It refuses to overwrite an existing file unless you pass --force.

Coding-Agent Defaults

If you want humans and coding agents to share the same safe local defaults, add a tiny .agentguard.json file to the repo:

{
  "profile": "coding-agent",
  "service": "support-agent",
  "trace_file": ".agentguard/traces.jsonl",
  "budget_usd": 5.0
}

agentguard.init(local_only=True) and agentguard doctor will pick this up automatically. Keep it local and static: no secrets, no API keys, no dashboard settings.
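
The lookup that init performs can be sketched in a few lines (the SDK's actual discovery logic may differ; this is an illustration, not its implementation): read the repo-local file if present, otherwise fall back to defaults.

```python
import json
from pathlib import Path

# Hypothetical sketch of repo-local config discovery (not the SDK's actual code).
DEFAULTS = {
    "profile": "coding-agent",
    "trace_file": ".agentguard/traces.jsonl",
    "budget_usd": 5.0,
}

def load_config(repo_root="."):
    path = Path(repo_root) / ".agentguard.json"
    config = dict(DEFAULTS)
    if path.exists():
        # Repo-local values override the defaults, key by key.
        config.update(json.loads(path.read_text()))
    return config
```

Because the file is static JSON with no secrets, it is safe to commit and share between humans and coding agents.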

Every agentguard quickstart --framework ... payload also has a matching runnable file under examples/starters/. Those starter files live in the repo for copy-paste onboarding and coding-agent setup; they are not shipped inside the PyPI wheel.

For the repo-first onboarding flow, see docs/guides/coding-agents.md.

For copy-paste setup snippets tailored to Codex, Claude Code, GitHub Copilot, Cursor, and MCP-capable agents, see docs/guides/coding-agent-safety-pack.md.

If you want AgentGuard to generate those repo-local instruction files for you:

agentguard skillpack --write
agentguard skillpack --target claude-code --write --output-dir .

skillpack writes a local .agentguard.json plus agent-specific instruction files for Codex, Claude Code, Copilot, or Cursor. By default it writes into agentguard_skillpack/ so you can review the files before copying them into a real repo.

MCP Server for Coding-Agent Workflows

If your coding agent already uses MCP, AgentGuard also ships a published read-only MCP server that exposes traces, decision events, alerts, usage, costs, and budget health from the AgentGuard read API:

npx -y @agentguard47/mcp-server

The MCP server is intentionally narrow. Use the SDK to enforce safety where the agent runs. Add MCP when you want Codex, Claude Code, Cursor, or another MCP-compatible client to inspect traces and incidents without bespoke glue.

Stateless Harnesses

If one managed-agent session can span multiple disposable harnesses or worker processes, pass a shared session_id to correlate those traces above the single-trace_id level:

from agentguard import JsonlFileSink, Tracer

tracer = Tracer(
    sink=JsonlFileSink(".agentguard/traces.jsonl"),
    service="managed-harness-a",
    session_id="support-session-001",
)

Each tracer instance still creates its own trace_id, but every emitted span and point event also carries the shared session_id. Guide: docs/guides/managed-agent-sessions.md

Try it in 60 seconds

No API keys. No dashboard. No network calls. Just run it:

pip install agentguard47
agentguard demo

AgentGuard offline demo
No API keys. No dashboard. No network calls.

1. BudgetGuard: stopping runaway spend
  warning fired at $0.84
  stopped on call 9: cost $1.08 exceeded $1.00

2. LoopGuard: stopping repeated tool calls
  stopped on repeated tool call: Loop detected ...

3. RetryGuard: stopping retry storms
  stopped retry storm: Retry limit exceeded ...

Local proof complete.

Prefer the example script instead of the CLI? This does the same local demo:

python examples/try_it_now.py

Open In Colab

Quickstart: Stop a Runaway Coding Agent in 4 Lines

from agentguard import Tracer, BudgetGuard, patch_openai

tracer = Tracer(guards=[BudgetGuard(max_cost_usd=5.00, warn_at_pct=0.8)])
patch_openai(tracer)  # auto-tracks every OpenAI call

# Use OpenAI normally - AgentGuard tracks cost and kills the agent at $5

That's it. Every ChatCompletion call is tracked. When accumulated cost hits $4 (80%), your warning fires. At $5, BudgetExceeded is raised and the agent stops.

No config files. No dashboard required. No dependencies.

For a deterministic local proof before wiring a real agent, run:

agentguard doctor
agentguard quickstart --framework raw
agentguard demo

agentguard doctor verifies the install path. agentguard quickstart prints the copy-paste starter for your stack. agentguard demo then proves SDK-only enforcement with a realistic local run. Keep the first integration local and only add hosted pieces after you need retained incidents or team-visible follow-through.

The Problem

Coding agents are cheap to start and expensive to leave unattended:

  • Cost overruns average 340% on autonomous agent tasks (source)
  • A single stuck retry or tool loop can burn through your budget in minutes
  • Existing tracing tools show you what happened after the burn; they don't stop the run while it is still happening

AgentGuard is built to stop a runaway coding agent mid-run, not just explain the damage later.

                          AgentGuard   LangSmith         Langfuse   Portkey
Hard budget enforcement   Yes          No                No         No
Kill agent mid-run        Yes          No                No         No
Loop detection            Yes          No                No         No
Cost tracking             Yes          Yes               Yes        Yes
Zero dependencies         Yes          No                No         No
Self-hosted option        Yes          No                Yes        No
Price                     Free (MIT)   $2.50/1k traces   $59/mo     $49/mo

See also: AgentGuard vs Vercel AI Gateway -- in-process SDK vs gateway proxy, compared across 7 axes; and Where AgentGuard fits in the agent security stack -- identity, MCP governance, sandboxing, and runtime behavior as separate layers.

Guards

Guards are runtime checks that raise exceptions when limits are hit. The agent stops immediately.

Guard                  What it stops                                      Example
BudgetGuard            Dollar/token/call overruns                         BudgetGuard(max_cost_usd=5.00)
LoopGuard              Exact repeated tool calls                          LoopGuard(max_repeats=3)
FuzzyLoopGuard         Similar tool calls, A-B-A-B patterns               FuzzyLoopGuard(max_tool_repeats=5)
TimeoutGuard           Wall-clock time limits                             TimeoutGuard(max_seconds=300)
RateLimitGuard         Calls-per-minute throttling                        RateLimitGuard(max_calls_per_minute=60)
RetryGuard             Retry storms on the same flaky tool                RetryGuard(max_retries=3)
BudgetAwareEscalation  Hard turns that should switch to a stronger model  BudgetAwareEscalation(..., escalate_on=EscalationSignal.TOKEN_COUNT(threshold=2000))

from agentguard import BudgetGuard, BudgetExceeded

budget = BudgetGuard(
    max_cost_usd=10.00,
    warn_at_pct=0.8,
    on_warning=lambda msg: print(f"WARNING: {msg}"),
)

# In your agent loop:
budget.consume(tokens=1500, calls=1, cost_usd=0.03)
# At 80% → warning callback fires
# At 100% → BudgetExceeded raised, agent stops

from agentguard import RetryGuard, RetryLimitExceeded, Tracer

retry_guard = RetryGuard(max_retries=3)
tracer = Tracer(guards=[retry_guard])

with tracer.trace("agent.run") as span:
    try:
        span.event("tool.retry", data={"tool_name": "search", "attempt": 1})
        span.event("tool.retry", data={"tool_name": "search", "attempt": 2})
        span.event("tool.retry", data={"tool_name": "search", "attempt": 3})
        span.event("tool.retry", data={"tool_name": "search", "attempt": 4})
    except RetryLimitExceeded:
        # Retry storm stopped
        pass
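
LoopGuard has no inline example above, so here is the idea in plain Python (a sketch of exact-repeat detection, not the AgentGuard API): count consecutive identical (tool, args) calls and raise once a threshold is crossed.

```python
# Plain-Python sketch of exact-repeat loop detection (not the AgentGuard API).
class LoopDetected(RuntimeError):
    pass

class ExactLoopDetector:
    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.last_call = None
        self.repeats = 0

    def record(self, tool_name, args):
        # Normalize the call so identical args always compare equal.
        call = (tool_name, tuple(sorted(args.items())))
        if call == self.last_call:
            self.repeats += 1
        else:
            self.last_call = call
            self.repeats = 1
        if self.repeats > self.max_repeats:
            raise LoopDetected(
                f"{tool_name} repeated {self.repeats} times with identical args"
            )

detector = ExactLoopDetector(max_repeats=3)
for _ in range(3):
    detector.record("search", {"query": "quantum computing"})  # 3 repeats allowed
try:
    detector.record("search", {"query": "quantum computing"})  # 4th trips it
except LoopDetected as e:
    print(f"stopped: {e}")
```

A changed argument resets the counter, so only genuinely stuck call patterns trip the detector.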

from agentguard import BudgetAwareEscalation, EscalationSignal

guard = BudgetAwareEscalation(
    primary_model="ollama/llama3.1:8b",
    escalate_model="claude-opus-4-6",
    escalate_on=(
        EscalationSignal.TOKEN_COUNT(threshold=2000),
        EscalationSignal.CONFIDENCE_BELOW(threshold=0.45),
    ),
)

model = guard.select_model(token_count=2430, confidence=0.39)

BudgetAwareEscalation gives you an advisor-style pattern without hiding the provider call inside the SDK. AgentGuard decides when the current turn is too hard for the cheap model; your app still chooses how to invoke the stronger model.

Guide: docs/guards/budget-aware-escalation.md
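
The selection rule itself is just deterministic threshold checks. A plain-Python sketch of the pattern (model names and thresholds here are illustrative, not the SDK's implementation):

```python
# Plain-Python sketch of budget-aware escalation (not the AgentGuard API).
def select_model(token_count, confidence,
                 primary="cheap-model", escalate="strong-model",
                 max_tokens=2000, min_confidence=0.45):
    # Escalate when any signal says the turn is too hard for the cheap model.
    if token_count > max_tokens or confidence < min_confidence:
        return escalate
    return primary

print(select_model(token_count=2430, confidence=0.39))  # strong-model
print(select_model(token_count=500, confidence=0.9))    # cheap-model
```

Because the rule is a pure function over observable signals, the same inputs always pick the same model, which keeps escalation auditable.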

Integrations

LangChain

pip install agentguard47[langchain]

from agentguard import Tracer, BudgetGuard
from agentguard.integrations.langchain import AgentGuardCallbackHandler

tracer = Tracer(guards=[BudgetGuard(max_cost_usd=5.00)])
handler = AgentGuardCallbackHandler(
    tracer=tracer,
    budget_guard=BudgetGuard(max_cost_usd=5.00),
)

# Pass to any LangChain component
llm = ChatOpenAI(callbacks=[handler])

LangGraph

pip install agentguard47[langgraph]

from agentguard.integrations.langgraph import guarded_node

@guarded_node(tracer=tracer, budget_guard=BudgetGuard(max_cost_usd=5.00))
def research_node(state):
    return {"messages": state["messages"] + [result]}

CrewAI

pip install agentguard47[crewai]

from agentguard.integrations.crewai import AgentGuardCrewHandler

handler = AgentGuardCrewHandler(
    tracer=tracer,
    budget_guard=BudgetGuard(max_cost_usd=5.00),
)

agent = Agent(role="researcher", step_callback=handler.step_callback)

OpenAI / Anthropic Auto-Instrumentation

from agentguard import Tracer, BudgetGuard, patch_openai, patch_anthropic

tracer = Tracer(guards=[BudgetGuard(max_cost_usd=5.00)])
patch_openai(tracer)      # auto-tracks all ChatCompletion calls
patch_anthropic(tracer)   # auto-tracks all Messages calls

Multi-Agent Safety

When multiple agents share state, a common failure mode is the reactive loop: Agent A updates shared state, Agent B reacts, Agent A reacts to B's update, and the cycle repeats. Without an explicit termination condition, these loops consume tokens indefinitely without converging on a result.

Anthropic's multi-agent coordination patterns guide calls out this exact risk for shared-state architectures and recommends time budgets and threshold-based stopping. AgentGuard's BudgetGuard and TimeoutGuard are those stopping conditions.

from agentguard import BudgetGuard, BudgetExceeded, TimeoutGuard, TimeoutExceeded

# Shared budget across both agents. When either hits the limit, the loop stops.
budget = BudgetGuard(max_cost_usd=2.00, warn_at_pct=0.8,
                     on_warning=lambda msg: print(f"WARN: {msg}"))
timeout = TimeoutGuard(max_seconds=120)

shared_state = {"revision": 0, "content": ""}

try:
    with timeout:
        while True:
            timeout.check()
            # Agent A: writer
            shared_state["content"] = f"draft v{shared_state['revision']}"
            budget.consume(tokens=500, calls=1, cost_usd=0.01)

            # Agent B: reviewer
            shared_state["revision"] += 1
            budget.consume(tokens=300, calls=1, cost_usd=0.008)
except (BudgetExceeded, TimeoutExceeded) as e:
    print(f"Terminated: {e}")
    print(f"Final state: revision {shared_state['revision']}")

The guards are static and deterministic. No agent can talk its way past a dollar limit or a wall-clock timeout.

Cost Tracking

Built-in pricing for OpenAI, Anthropic, Google, Mistral, and Meta models. Updated monthly.

from agentguard import estimate_cost

# Single call estimate
cost = estimate_cost("gpt-4o", input_tokens=1000, output_tokens=500)
# → $0.00625

# Track across a trace — cost is auto-accumulated per span
with tracer.trace("agent.run") as span:
    span.cost.add("gpt-4o", input_tokens=1200, output_tokens=450)
    span.cost.add("claude-sonnet-4-5-20250929", input_tokens=800, output_tokens=300)
    # cost_usd included in trace end event

Tracing

Full structured tracing with zero dependencies — JSONL output, spans, events, and cost data.

from agentguard import Tracer, JsonlFileSink, BudgetGuard

tracer = Tracer(
    sink=JsonlFileSink("traces.jsonl"),
    guards=[BudgetGuard(max_cost_usd=5.00)],
)

with tracer.trace("agent.run") as span:
    span.event("reasoning", data={"thought": "search docs"})
    with span.span("tool.search", data={"query": "quantum computing"}):
        pass  # your tool logic
    span.cost.add("gpt-4o", input_tokens=1200, output_tokens=450)

$ agentguard report traces.jsonl

AgentGuard report
  Total events: 9
  Spans: 6  Events: 3
  Estimated cost: $0.01
  Savings ledger: exact 800 tokens / $0.0010, estimated 1500 tokens / $0.0075

When a run trips a guard or needs escalation, render a shareable incident report:

agentguard incident traces.jsonl
agentguard incident traces.jsonl --format html > incident.html

The incident report summarizes guard triggers, exact-vs-estimated savings, and the dashboard upgrade path for retained alerts and remote kill switch.
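
Because traces are newline-delimited JSON, you can also post-process them without the CLI. A minimal sketch (the cost_usd field follows the cost-tracking section above; treat the exact event schema as an assumption):

```python
import json

# Sum estimated cost across JSONL trace lines without the CLI.
# The cost_usd field on trace-end events is an assumption about the schema.
def total_cost(lines):
    total = 0.0
    for line in lines:
        event = json.loads(line)
        total += float(event.get("cost_usd", 0.0))
    return total

sample = [
    '{"type": "span_end", "name": "agent.run", "cost_usd": 0.0075}',
    '{"type": "event", "name": "reasoning"}',
    '{"type": "span_end", "name": "tool.search", "cost_usd": 0.0025}',
]
print(f"${total_cost(sample):.4f}")  # $0.0100
```

The same line-by-line parse works for ad-hoc filters: anything `agentguard report` summarizes is recoverable from the raw file.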

Decision Tracing

Capture agent proposals, human edits, overrides, approvals, and binding outcomes through the normal AgentGuard event path.

from agentguard import JsonlFileSink, Tracer, decision_flow

tracer = Tracer(
    sink=JsonlFileSink(".agentguard/traces.jsonl"),
    service="approval-flow",
)

with tracer.trace("agent.run") as run:
    with decision_flow(
        run,
        workflow_id="deploy-approval",
        object_type="deployment",
        object_id="deploy-042",
        actor_type="agent",
        actor_id="release-bot",
    ) as decision:
        decision.proposed({"action": "deploy", "environment": "staging"})
        decision.edited(
            {"action": "deploy", "environment": "production"},
            actor_type="human",
            actor_id="reviewer-123",
            reason="Customer approved direct rollout",
        )
        decision.approved(actor_type="human", actor_id="reviewer-123")
        decision.bound(
            actor_type="system",
            actor_id="deploy-api",
            binding_state="applied",
            outcome="success",
        )

Every decision event includes a stable schema in event.data:

  • decision_id
  • workflow_id
  • trace_id
  • object_type
  • object_id
  • actor_type
  • actor_id
  • event_type
  • proposal
  • final
  • diff
  • reason
  • comment
  • timestamp
  • binding_state
  • outcome
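
The schema above is enough to filter decision events from a local JSONL file yourself. A sketch (the event.data envelope is an assumption; the inner keys come from the schema list):

```python
import json

# Filter normalized decision events for one workflow from JSONL lines.
# The top-level "data" envelope is an assumption; the keys inside it
# follow the decision-event schema above.
def decisions_for(lines, workflow_id):
    out = []
    for line in lines:
        event = json.loads(line)
        data = event.get("data", {})
        if data.get("workflow_id") == workflow_id:
            out.append((data["event_type"], data["actor_id"]))
    return out

sample = [
    '{"data": {"workflow_id": "deploy-approval", "event_type": "proposed", "actor_id": "release-bot"}}',
    '{"data": {"workflow_id": "other", "event_type": "approved", "actor_id": "x"}}',
    '{"data": {"workflow_id": "deploy-approval", "event_type": "approved", "actor_id": "reviewer-123"}}',
]
print(decisions_for(sample, "deploy-approval"))
```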

Guide: docs/guides/decision-tracing.md

For local JSONL traces, you can extract the normalized decision events without writing your own parser:

agentguard decisions .agentguard/traces.jsonl
agentguard decisions .agentguard/traces.jsonl --workflow-id deploy-approval --json

For retained traces exposed through MCP, use the get_trace_decisions tool to pull the same normalized decision payloads from a hosted trace by trace_id.

Evaluation

Assert properties of your traces in tests or CI.

from agentguard import EvalSuite

result = (
    EvalSuite("traces.jsonl")
    .assert_no_loops()
    .assert_budget_under(tokens=50_000)
    .assert_completes_within(seconds=30)
    .assert_total_events_under(500)
    .assert_no_budget_exceeded()
    .assert_no_errors()
    .run()
)

agentguard eval traces.jsonl --ci   # exits non-zero on failure
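
The same checks are easy to express directly over parsed events when you want custom CI-side assertions. A plain-Python sketch (event shapes are assumptions; the guard.budget_exceeded event name follows the incident-report section below):

```python
import json

# Minimal sketch of CI-style trace assertions (not the EvalSuite API).
def assert_trace_healthy(lines, max_events=500):
    events = [json.loads(line) for line in lines]
    assert len(events) <= max_events, f"{len(events)} events exceeds {max_events}"
    errors = [e for e in events if e.get("type") == "error"]
    assert not errors, f"trace contains {len(errors)} error events"
    exceeded = [e for e in events if e.get("name") == "guard.budget_exceeded"]
    assert not exceeded, "budget guard tripped during this run"

healthy = ['{"type": "span_end", "name": "agent.run"}']
assert_trace_healthy(healthy)  # passes silently
```

Plain assertions exit non-zero on failure under pytest or `python -c`, which is all a CI gate needs.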

CI Cost Gates

Fail your CI pipeline if an agent run exceeds a cost budget. No competitor offers this.

# .github/workflows/cost-gate.yml (simplified)
- name: Run agent with budget guard
  run: |
    python3 -c "
    from agentguard import Tracer, BudgetGuard, JsonlFileSink
    tracer = Tracer(
        sink=JsonlFileSink('ci_traces.jsonl'),
        guards=[BudgetGuard(max_cost_usd=5.00)],
    )
    # ... your agent run here ...
    "

- name: Evaluate traces
  uses: bmdhodl/agent47/.github/actions/agentguard-eval@main
  with:
    trace-file: ci_traces.jsonl
    assertions: "no_errors,max_cost:5.00"

Full workflow: docs/ci/cost-gate-workflow.yml

Incident Reports

Turn a trace into a postmortem-style incident summary:

agentguard incident traces.jsonl --format markdown
agentguard incident traces.jsonl --format html > incident.html

Use this when a run hits guard.budget_warning, guard.budget_exceeded, guard.loop_detected, or a fatal error. AgentGuard will summarize the run, separate exact and estimated savings, and suggest the next control-plane step.

Async Support

Full async API mirrors the sync API.

from agentguard import AsyncTracer, BudgetGuard, patch_openai_async

tracer = AsyncTracer(guards=[BudgetGuard(max_cost_usd=5.00)])
patch_openai_async(tracer)

# All async OpenAI calls are now tracked and budget-enforced

Optional Hosted Dashboard

For teams that need retained history, alerts, and remote controls, the SDK can mirror traces to the hosted dashboard:

from agentguard import Tracer, HttpSink, BudgetGuard

tracer = Tracer(
    sink=HttpSink(
        url="https://app.agentguard47.com/api/ingest",
        api_key="ag_...",
        batch_size=20,
        flush_interval=10.0,
        compress=True,
    ),
    guards=[BudgetGuard(max_cost_usd=50.00)],
    metadata={"env": "prod"},
    sampling_rate=0.1,  # 10% of traces
)

Keep the first integration local. Add HttpSink only when you need retained incidents, alerts, or hosted follow-through.

Architecture

Your Agent Code
    │
    ▼
┌──────────────────────────────────────┐
│         Tracer / AsyncTracer         │  ← trace(), span(), event()
│  ┌────────────┐  ┌────────────────┐  │
│  │   Guards   │  │  CostTracker   │  │  ← runtime intervention
│  └────────────┘  └────────────────┘  │
└──────────┬───────────────────────────┘
           │ emit(event)
    ┌──────┼──────────┬───────────┐
    ▼      ▼          ▼           ▼
 JsonlFile  HttpSink  OtelTrace  Stdout
  Sink      (gzip,    Sink       Sink
            retry)
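
The fan-out at the bottom of the diagram is a single emit() dispatched to whichever sinks are configured. A minimal sketch of that shape (class names here are illustrative, not the SDK's internals):

```python
# Minimal sketch of the tracer-to-sink fan-out (illustrative, not SDK internals).
class StdoutSink:
    def emit(self, event):
        print(event)

class ListSink:
    def __init__(self):
        self.events = []

    def emit(self, event):
        self.events.append(event)

class FanOutTracer:
    def __init__(self, sinks):
        self.sinks = sinks

    def emit(self, event):
        for sink in self.sinks:  # every sink sees every event
            sink.emit(event)

mem = ListSink()
tracer = FanOutTracer([mem, StdoutSink()])
tracer.emit({"type": "event", "name": "reasoning"})
```

Because sinks share one emit() interface, swapping JSONL for HTTP or stdout changes configuration, not agent code.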

What's in this repo

Directory    Description                                                                License
sdk/         Python SDK — guards, tracing, evaluation, integrations                     MIT
mcp-server/  Read-only MCP surface for traces, alerts, usage, costs, and budget health  MIT
site/        Landing page                                                               MIT

Dashboard is in a separate private repo (agent47-dashboard).

Security

  • Zero runtime dependencies — one package, nothing to audit, no supply chain risk
  • OpenSSF Scorecard — automated security analysis on every push
  • CodeQL scanning — GitHub's semantic code analysis on every PR
  • Bandit security linting — Python-specific security checks in CI

Contributing

See CONTRIBUTING.md for dev setup, test commands, and PR guidelines.

Commercial Support

Need help rolling out coding-agent safety in production? BMD Pat LLC offers:

  • $500 Async Azure Audit -- cost, reliability, and governance review. No meetings. Results in 5 business days.
  • Custom agent guardrails -- production-grade cost controls, compliance tooling, kill switches.

Start a project | See the research

License

MIT (BMD PAT LLC)

Latest Release Notes (1.2.8)

Agent Security Stack Positioning

  • Added a new competitive-positioning doc that places AgentGuard in the runtime behavior and budget layer of the emerging agent security stack, beside identity, MCP governance, and sandboxing layers.
  • Updated the README competitive-doc links so the public repo points to both the gateway comparison and the broader stack-layer framing.

Per-Token Budget Proof

  • Added a new local examples/per_token_budget_spike.py proof that prices turns from token counts and shows BudgetGuard catching a single oversized turn without any API key or network access.
  • Updated README, getting-started docs, and examples docs to frame budget enforcement around token-metered pricing and point users to the new local proof path.

Budget-Aware Escalation Guard

  • Added BudgetAwareEscalation, EscalationSignal, and EscalationRequired so developers can keep a cheaper default model and escalate only hard turns to a stronger model without adding provider-specific SDK dependencies.
  • Added support for token-count, confidence, tool-call-depth, and custom-rule escalation triggers, plus a local example and guide for the Llama-to-Claude advisor-style pattern.

Managed-Agent Session Correlation

  • Added optional session_id support to Tracer, AsyncTracer, and agentguard.init(...) so disposable harnesses can correlate multiple trace streams under one higher-level managed-agent session without changing sink behavior.
  • Added a local managed-session guide plus a runnable example that proves two separate tracer instances can emit distinct trace_id values while sharing one session_id.

Coding-Agent Skill Packs

  • Added agentguard skillpack so developers and coding agents can generate repo-local .agentguard.json defaults plus instruction files for Codex, Claude Code, GitHub Copilot, and Cursor without bespoke copy-paste setup.
  • Updated the coding-agent onboarding docs to prefer the generated local-first skill-pack flow and the quickstart --write verification loop over checked-in example paths.

Supply Chain And Release Prep

  • Replaced unhashed workflow pip install steps with a checked-in, hash-locked CI toolchain requirements file and switched CI, entropy, and publish validation to use that shared lock.
  • Pinned the root and MCP server Dockerfiles to the current node:22-alpine image digest to remove mutable base-image references from the repo's build surfaces.
  • Prepared the GitHub side of PyPI Trusted Publishing by adding the pypi environment and wiring the publish workflow to it, while deliberately keeping token auth in place until the PyPI project owner adds the matching trusted publisher.

Full changelog: CHANGELOG.md


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

agentguard47-1.2.8.tar.gz (158.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

agentguard47-1.2.8-py3-none-any.whl (86.3 kB)

Uploaded Python 3

File details

Details for the file agentguard47-1.2.8.tar.gz.

File metadata

  • Download URL: agentguard47-1.2.8.tar.gz
  • Upload date:
  • Size: 158.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for agentguard47-1.2.8.tar.gz
Algorithm Hash digest
SHA256 afd7c69736d8aa14e1a49a2a02d5a869c67ce42d3f2559d91f4c9648bc92c844
MD5 a25215497bbafcf9cc2785eeb318596f
BLAKE2b-256 dacb6c0a90a57570e2a477c6e798860d3bc96e3c81157764a7217c85d1882fe2

See more details on using hashes here.

File details

Details for the file agentguard47-1.2.8-py3-none-any.whl.

File metadata

  • Download URL: agentguard47-1.2.8-py3-none-any.whl
  • Upload date:
  • Size: 86.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for agentguard47-1.2.8-py3-none-any.whl
Algorithm Hash digest
SHA256 e94b09d1f7d58ba2635cbd16a3ec9b5561bcef4df9f056ee339597922c10a255
MD5 44679104e27d0b43d7deb0f6f100eace
BLAKE2b-256 6038ec389ed160601682b3e02ac6df0c4b4765fac4120a23f28c1c10f4bb7a5b

See more details on using hashes here.
