
FastAIAgent SDK

Build, debug, evaluate, and operate AI agents. The only SDK with Agent Replay — fork-and-rerun debugging for AI agents.

Works standalone or connected to the FastAIAgent Platform for visual editing, production monitoring, and team collaboration.



Quickstart

from fastaiagent import Agent, LLMClient

# Create an LLM client
llm = LLMClient(provider="openai", model="gpt-4o")

# Create an agent
agent = Agent(
    name="my-agent",
    system_prompt="You are a helpful assistant.",
    llm=llm,
)

# Run it
result = agent.run("What is the capital of France?")
print(result.output)
print(result.trace_id)  # every run is traced — use this ID for replay/debugging
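
Provider credentials aren't shown above; a minimal sketch, assuming the OpenAI-backed client reads the standard OPENAI_API_KEY environment variable (an assumption, not confirmed by this README):

import os

# Assumed: export the key before constructing LLMClient if it isn't
# already set in your shell environment.
os.environ["OPENAI_API_KEY"] = "sk-..."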

Debug a failing agent in 30 seconds

from fastaiagent.trace import Replay

# Load a trace from a production failure
replay = Replay.load("trace_abc123")

# Step through to find the problem
replay.step_through()
# Step 3: LLM hallucinated the refund policy ← found it

# Fork at the failing step, fix, rerun
forked = replay.fork_at(step=3)
forked.modify_prompt("Always cite the exact policy section...")
result = forked.rerun()

No other SDK can do this.
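
A quick sanity check on the fix, assuming rerun() returns the same result object as agent.run() (an assumption; the README doesn't state it explicitly):

# Inspect the corrected output and the forked run's own trace.
print(result.output)
print(result.trace_id)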

Evaluate agents systematically

from fastaiagent.eval import evaluate

results = evaluate(
    agent_fn=my_agent.run,
    dataset="test_cases.jsonl",
    scorers=["correctness", "relevance"]
)
print(results.summary())
# correctness: 92% | relevance: 88%
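
The string names above suggest built-in scorers. A callable scorer is sketched below under the assumption that evaluate() also accepts plain functions scoring one example (hypothetical, not confirmed by this README):

from fastaiagent.eval import evaluate

# Hypothetical callable scorer: 1.0 on an exact match against the
# example's "expected" field, 0.0 otherwise.
def exact_match(example: dict, output: str) -> float:
    return float(output.strip() == example["expected"].strip())

results = evaluate(
    agent_fn=my_agent.run,
    dataset="test_cases.jsonl",
    scorers=["correctness", exact_match],
)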

Trace any agent — yours or LangChain/CrewAI

import fastaiagent
fastaiagent.integrations.langchain.enable()

# Your existing LangChain agent, now with full tracing
result = langchain_agent.invoke({"input": "..."})
# → Traces stored locally or pushed to FastAIAgent Platform
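
The heading also promises CrewAI support; presumably a matching enable call exists under integrations (hypothetical, mirroring the LangChain one above):

import fastaiagent
fastaiagent.integrations.crewai.enable()  # assumed to mirror the langchain integration

# Your existing CrewAI crew, now traced (kickoff is CrewAI's standard entry point)
result = my_crew.kickoff(inputs={"input": "..."})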

Build agents with guardrails and cyclic workflows

from fastaiagent import Agent, Chain, LLMClient
from fastaiagent.guardrail import no_pii, json_valid

agent = Agent(
    name="support-bot",
    system_prompt="You are a helpful support agent...",
    llm=LLMClient(provider="openai", model="gpt-4o"),
    tools=[search_tool, refund_tool],
    guardrails=[no_pii(), json_valid()]
)

# Chains with loops (retry until quality is good enough)
chain = Chain("support-pipeline", state_schema=SupportState)
chain.add_node("research", agent=researcher)
chain.add_node("evaluate", agent=evaluator)
chain.add_node("respond", agent=responder)
chain.connect("research", "evaluate")
chain.connect("evaluate", "research", max_iterations=3, exit_condition="quality >= 0.8")
chain.connect("evaluate", "respond", condition="quality >= 0.8")

result = chain.execute({"message": "My order is late"}, trace=True)
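
The chain assumes a SupportState schema carrying the quality score that the evaluate edges read; a minimal sketch, assuming a plain dataclass is accepted as a state_schema (an assumption; the SDK may require its own schema base class):

from dataclasses import dataclass

# Hypothetical state schema: "quality" is the field the
# evaluate -> research and evaluate -> respond conditions inspect.
@dataclass
class SupportState:
    message: str = ""
    quality: float = 0.0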

Multi-agent teams with context

from fastaiagent import Agent, LLMClient, RunContext, Supervisor, TextDelta, Worker, tool

@tool(name="get_tickets")
def get_tickets(ctx: RunContext[AppState], status: str) -> str:
    """Get support tickets for the current user."""
    return ctx.state.db.query("tickets", user_id=ctx.state.user_id, status=status)

# llm as defined in the Quickstart; get_billing is a tool defined like get_tickets
support = Agent(name="support", llm=llm, tools=[get_tickets], system_prompt="Handle tickets.")
billing = Agent(name="billing", llm=llm, tools=[get_billing], system_prompt="Handle billing.")

supervisor = Supervisor(
    name="customer-service",
    llm=LLMClient(provider="openai", model="gpt-4o"),
    workers=[
        Worker(agent=support, role="support", description="Manages tickets"),
        Worker(agent=billing, role="billing", description="Handles billing"),
    ],
    system_prompt=lambda ctx: f"You lead support for {ctx.state.company}. Be helpful.",
)

# Context flows to all workers and their tools
ctx = RunContext(state=AppState(db=db, user_id="u-1", company="Acme"))
result = supervisor.run("Show my open tickets and billing", context=ctx)

# Stream the supervisor's response
async for event in supervisor.astream("Help me", context=ctx):
    if isinstance(event, TextDelta):
        print(event.text, end="")
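
RunContext is parameterized by an application state type. A minimal sketch of the AppState used above, assuming any object exposing the referenced attributes will do (hypothetical):

from dataclasses import dataclass
from typing import Any

# Hypothetical application state: the get_tickets tool and the supervisor
# prompt read ctx.state.db, ctx.state.user_id, and ctx.state.company.
@dataclass
class AppState:
    db: Any          # database handle exposing a .query(...) method
    user_id: str
    company: str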

Connect to FastAIAgent Platform (optional)

import fastaiagent as fa
from fastaiagent.eval import evaluate

fa.connect(api_key="fa-...", project="my-project")

# Traces automatically sent to the platform dashboard
result = agent.run("Help me")

# Pull versioned prompts from the platform registry
prompt = fa.PromptRegistry().get("support-prompt")

# Publish eval results to the platform
results = evaluate(agent_fn=agent.run, dataset="test_cases.jsonl")
results.publish()

The SDK works standalone. The Platform adds production observability, prompt management, evaluation dashboards, team collaboration, and human-in-the-loop (HITL) approval workflows.

Free tier available →


Install

pip install fastaiagent

With optional integrations:

pip install "fastaiagent[openai]"       # OpenAI auto-tracing
pip install "fastaiagent[langchain]"    # LangChain auto-tracing
pip install "fastaiagent[kb]"           # Local knowledge base
pip install "fastaiagent[all]"          # Everything

Documentation

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Apache 2.0 — see LICENSE.
