
agentloop

Python SDK for AgentLoop — middleware that turns human corrections into searchable memory for your agent.

  • Python 3.9+, sync and async
  • One runtime dependency (httpx), nothing else
  • Graceful by default — network blips don't break your agent
  • Type hints throughout, full dataclass response shapes

Install

pip install agentloop-py

Quick start (sync)

import os
from openai import OpenAI
from agentloop import AgentLoop

loop = AgentLoop(api_key=os.environ["AGENTLOOP_API_KEY"])
openai = OpenAI()

def ask(question: str, user_id: str) -> str:
    # 1. Before calling the LLM, pull relevant corrections.
    memories = loop.search(question, user_id=user_id, limit=3)

    # 2. Inject them into your system prompt.
    facts = "\n".join(f"- {m.fact}" for m in memories) if memories else ""
    system = "You are a helpful assistant."
    if facts:
        system += f"\n\nTrusted facts from past corrections:\n{facts}"

    # 3. Call the LLM.
    resp = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    answer = resp.choices[0].message.content or ""

    # 4. Log the turn for review.
    loop.log_turn(question, answer, user_id=user_id)

    return answer

That's the whole integration — one call before the LLM, one after. For a zero-setup drop-in wrapper, install agentloop-py-openai or agentloop-py-anthropic instead.

Quick start (async)

from agentloop.aio import AsyncAgentLoop

async def ask(question: str, user_id: str) -> str:
    async with AsyncAgentLoop(api_key=...) as loop:
        memories = await loop.search(question, user_id=user_id)
        answer = ...  # your LLM call goes here
        await loop.log_turn(question, answer, user_id=user_id)
        return answer

Use AsyncAgentLoop from agentloop.aio in asyncio code (FastAPI, aiohttp, etc.). The API is identical; every method is a coroutine except feedback_url(), which stays synchronous because it makes no network call, only an HMAC computation.

Configuration

loop = AgentLoop(
    api_key="ak_...",                           # required
    base_url="https://...",                     # default: https://api.getagentloop.io
    timeout_s=10.0,                             # per-request timeout
    feedback_signing_secret="...",              # only needed for feedback_url()
    throw_on_error=False,                       # see "Graceful failures"
    http_client=my_httpx_client,                # inject custom httpx.Client
)

Base URL resolution

The base_url argument follows this priority order:

  1. Explicit base_url passed to the constructor (highest)
  2. AGENTLOOP_BASE_URL environment variable
  3. Hardcoded hosted-backend URL (fallback)

This lets you point the SDK at a local dev server, a self-hosted deployment, or a future gateway without code changes — just set AGENTLOOP_BASE_URL in the environment.

export AGENTLOOP_BASE_URL=http://localhost:8080
python your_app.py  # now points at a local dev backend instead of api.getagentloop.io
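The three-step resolution order above is easy to express in code. A simplified sketch of the logic (not the SDK's actual source):

```python
import os
from typing import Optional

DEFAULT_BASE_URL = "https://api.getagentloop.io"

def resolve_base_url(explicit: Optional[str] = None) -> str:
    # 1. An explicit constructor argument wins.
    if explicit:
        return explicit
    # 2. Otherwise honor the environment variable.
    env_url = os.environ.get("AGENTLOOP_BASE_URL")
    if env_url:
        return env_url
    # 3. Fall back to the hosted backend.
    return DEFAULT_BASE_URL
```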

Four methods

search(query, *, user_id=None, limit=3, tags=None) → list[Memory]

Retrieve corrections before your LLM call. Returns [] on failure (unless throw_on_error=True).

memories = loop.search("pix limit at night", user_id="u_123", limit=5)
for m in memories:
    print(m.fact, m.score, m.tags)

log_turn(question, agent_response, *, user_id=None, session_id=None, signals=None, metadata=None) → LogTurnResponse

Queue a turn for review. Returns a default response on failure (unless throw_on_error=True).

result = loop.log_turn(
    question="What's the Pix limit?",
    agent_response="R$5,000",
    user_id="u_123",
    signals={"thumbs_down": True},
    metadata={"latency_ms": 240, "model": "gpt-4o-mini"},
)
if result.was_duplicate:
    # Backend deduplicated this turn against an existing pending one.
    # The returned turn_id points to the merged doc.
    ...

annotate(*, question, agent_response, correction, rating, ...) → AnnotateResponse

Create an annotation directly (bypassing the review queue). Always raises on failure; silent degradation would hide reviewer work.

result = loop.annotate(
    question="What's the Pix limit at night?",
    agent_response="R$5,000",
    correction="Pix limit between 8pm and 6am is R$1,000.",
    rating="incorrect",
    root_cause="context",
    tags=["pix", "limits"],
    reviewer="maria@luma.com.br",
)

feedback_url(question, agent_response, *, user_id="", session_id="") → str

Generate an HMAC-signed URL for the embedded feedback widget. Requires feedback_signing_secret. Sync on both clients.

url = loop.feedback_url(question, answer, user_id="u_123")

Signatures are byte-identical to the JavaScript SDK, so URLs signed in Python validate on backends that also accept JS-signed URLs.
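The SDK's actual signing scheme is internal, but the general pattern, an HMAC-SHA256 over a canonically ordered query string, can be sketched as follows. The parameter ordering and the widget URL here are illustrative assumptions, not the SDK's real values:

```python
import hashlib
import hmac
from urllib.parse import urlencode

def sign_feedback_url(secret: str, question: str, agent_response: str,
                      user_id: str = "", session_id: str = "") -> str:
    # Canonicalize: sort parameters so every SDK serializes identically.
    params = sorted({
        "question": question,
        "agent_response": agent_response,
        "user_id": user_id,
        "session_id": session_id,
    }.items())
    payload = urlencode(params)
    # HMAC-SHA256 over the canonical payload, hex-encoded.
    sig = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
    # Hypothetical widget URL; the real SDK builds this for you.
    return f"https://feedback.example.com/widget?{payload}&sig={sig}"
```

Byte-identical signatures across the Python and JavaScript SDKs are only possible with a canonical serialization of the parameters; sorting them, as above, is one common way to achieve that.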

Graceful failures

By default, search() and log_turn() return empty/default values on failure. AgentLoop calls sit on the critical path of your agent's response; a transient network blip shouldn't turn into a 500.

# Hard failures
from agentloop import AgentLoopError

loop = AgentLoop(api_key=..., throw_on_error=True)
try:
    memories = loop.search("...")
except AgentLoopError as e:
    print(e.status, e.body)

annotate() always raises regardless of this flag.
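One common way to implement this kind of graceful degradation is a decorator that swallows errors and returns a default unless the client opted into raising. A sketch of the pattern, not the SDK's actual internals:

```python
from functools import wraps

def graceful(default):
    """Return `default` on any error unless the client opts into raising."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            try:
                return fn(self, *args, **kwargs)
            except Exception:
                if getattr(self, "throw_on_error", False):
                    raise
                return default
        return wrapper
    return decorator

class FlakyClient:
    # Stand-in client whose search always fails, to show the behavior.
    def __init__(self, throw_on_error: bool = False):
        self.throw_on_error = throw_on_error

    @graceful(default=[])
    def search(self, query: str):
        raise ConnectionError("network blip")
```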

Context managers

Both clients implement the context-manager protocol for clean resource cleanup:

with AgentLoop(api_key=...) as loop:
    loop.search("...")
# httpx.Client is closed on exit

async with AsyncAgentLoop(api_key=...) as loop:
    await loop.search("...")
# httpx.AsyncClient is closed on exit
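Context-manager support is just the standard __enter__/__exit__ protocol wrapped around closing the underlying HTTP client. A minimal illustration with a toy resource (not the SDK's real implementation):

```python
class MiniClient:
    """Toy client that releases its underlying resource on exit."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

    def __enter__(self) -> "MiniClient":
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        # Always close, even if the with-block raised.
        self.close()
```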

License

MIT
