Voice AI agent simulator and evaluation harness


🧪 Agent Simulate SDK

A Python SDK for testing conversational voice AI agents through realistic customer simulations.
Built by Future AGI | Docs | Platform


🚀 Overview

Agent Simulate provides a powerful framework for testing deployed voice AI agents. It automates realistic customer conversations, records the full interaction, and integrates with evaluation pipelines to help you ship high-quality, reliable agents.

  • 📞 Test Deployed Agents: Connect directly to your agent in a LiveKit room.
  • 🎭 Persona-driven Scenarios: Define customer personas, situations, and goals.
  • 🎙️ Full Audio & Transcripts: Capture complete conversation audio and text.
  • 📊 Integrate Evaluations: Use ai-evaluation to score agent performance.

Key Features

  • Agent Definition: Configure the connection to your deployed agent, including room, prompts, and credentials.
  • Scenario Creation: Programmatically define test cases with unique customer personas, situations, and desired outcomes.
  • Automated Test Runner: Orchestrates the simulation, connects the persona to the agent, and manages the conversation flow.
  • Audio/Transcript Capture: Automatically records individual and combined audio tracks, plus a full text transcript.
  • Evaluation Integration: Seamlessly pass test results (audio, transcripts) to the ai-evaluation library for scoring.
  • Extensible & Customizable: Customize STT, TTS, and LLM providers for the simulated customer.
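
The last feature covers provider customization for the simulated customer. The SDK's actual configuration surface isn't documented on this page, so the sketch below is purely illustrative: CustomerConfig and its field names are assumptions, not the real agent-simulate API; it only conveys the shape of per-run provider overrides.

```python
# HYPOTHETICAL sketch: CustomerConfig and its fields are illustrative
# assumptions, not the actual agent-simulate API. They only show the
# idea of per-simulation provider overrides for the simulated customer.
from dataclasses import dataclass

@dataclass
class CustomerConfig:
    stt: str = "deepgram"     # speech-to-text used to hear the agent
    tts: str = "elevenlabs"   # text-to-speech used to voice the customer
    llm: str = "gpt-4o-mini"  # model generating the customer's replies

# Default configuration, with only the LLM swapped out:
config = CustomerConfig(llm="gpt-4o")
print(config.stt, config.tts, config.llm)  # → deepgram elevenlabs gpt-4o
```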

🔧 Installation

# Install through pip 
pip install agent-simulate

To work on the project itself, clone it and install it in editable mode:

git clone https://github.com/future-agi/agent-simulate.git
cd agent-simulate
pip install -e .  # or `poetry install` if you use Poetry

The project uses Poetry for dependency management.

Download VAD Model Weights

The SDK uses Silero VAD for voice activity detection. Download the required model weights by running this script once (for example, save it as download_vad.py and run it with python download_vad.py):

from livekit.plugins import silero

if __name__ == "__main__":
    print("Downloading Silero VAD model...")
    silero.VAD.load()
    print("Download complete.")

🧑‍💻 Quickstart

1. 🔐 Set Environment Variables

Create a .env file with your credentials:

# LiveKit Server Details
LIVEKIT_URL="wss://your-livekit-server.com"
LIVEKIT_API_KEY="your-api-key"
LIVEKIT_API_SECRET="your-api-secret"

# OpenAI API Key (for the default simulated customer)
OPENAI_API_KEY="your-openai-key"

# Future AGI Evaluation Keys (for running evaluations)
FI_API_KEY="your-fi-api-key"
FI_SECRET_KEY="your-fi-secret-key"
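
Before launching a run, it can save time to verify that these variables are actually set. Below is a minimal pre-flight check using only the standard library; missing_env_vars is a helper written here for illustration, not part of the SDK:

```python
import os

# The variable names below are the ones from the .env example above.
REQUIRED_VARS = [
    "LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET",
    "OPENAI_API_KEY", "FI_API_KEY", "FI_SECRET_KEY",
]

def missing_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: fail fast before starting a simulation run
if missing_env_vars():
    print("Missing:", ", ".join(missing_env_vars()))
```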

2. ✅ Run a Simulation

This example connects a simulated customer ("Alice") to your deployed agent.

import asyncio
import os
from dotenv import load_dotenv
from fi.simulate import AgentDefinition, Scenario, Persona, TestRunner
from fi.simulate.evaluation import evaluate_report

load_dotenv()

async def main():
    # 1. Define your deployed agent
    agent_definition = AgentDefinition(
        name="my-support-agent",
        url=os.environ["LIVEKIT_URL"],
        room_name="support-room",  # The room where your agent is waiting
        system_prompt="Helpful support agent",  # System prompt that defines the agent's behavior
    )

    # 2. Create a test scenario
    scenario = Scenario(
        name="Customer Support Test",
        dataset=[
            Persona(
                persona={"name": "Alice", "mood": "frustrated"},
                situation="She cannot log into her account.",
                outcome="The agent should guide her through password reset.",
            ),
        ]
    )

    # 3. Run the test
    runner = TestRunner()
    report = await runner.run_test(
        agent_definition,
        scenario,
        record_audio=True,  # Capture WAV files
    )

    # 4. View results
    for result in report.results:
        print(f"Transcript: {result.transcript}")
        print(f"Combined Audio Path: {result.audio_combined_path}")

    # 5. Evaluate the Report
    # This helper runs evaluations for each test case in the report.
    # Map report fields (e.g., 'transcript') to the inputs required by the eval template.
    evaluated_report = evaluate_report(
        report,
        eval_specs=[
            {
                "eval_templates": ["task_completion"],
                "template_inputs": {"transcript": "transcript"},
            },
            {
                "eval_templates": ["audio_quality"],
                "template_inputs": {"audio": "audio_combined_path"},
            },
        ],
    )

    # View evaluation results
    for result in evaluated_report.results:
        for eval_result in result.evaluation_results:
            print(f"Evaluation for {eval_result.eval_template_name}:")
            print(f"  Score: {eval_result.score}")
            print(f"  Output: {eval_result.output}")


if __name__ == "__main__":
    asyncio.run(main())

🚀 LLM Evaluation with Future AGI Platform

Future AGI delivers a complete, iterative evaluation lifecycle so you can move from prototype to production with confidence:

  1. Curate & Annotate Datasets: Build, import, label, and enrich evaluation datasets in-cloud. Synthetic-data generation and Hugging Face imports are built in.
  2. Benchmark & Compare: Run prompt / model experiments on those datasets, track scores, and pick the best variant in Prompt Workbench or via the SDK.
  3. Fine-Tune Metrics: Create fully custom eval templates with your own rules, scoring logic, and models to match domain needs.
  4. Debug with Traces: Inspect every failing datapoint through rich traces, with latency, cost, spans, and evaluation scores side by side.
  5. Monitor in Production: Schedule Eval Tasks to score live or historical traffic, set sampling rates, and surface alerts right in the Observe dashboard.
  6. Close the Loop: Promote real-world failures back into your dataset, retrain / re-prompt, and rerun the cycle until performance meets spec.

Everything you need—including SDK guides, UI walkthroughs, and API references—is in the Future AGI docs.


🗺️ Roadmap

  • Core Simulation Engine
  • Persona-driven Scenarios
  • Audio & Transcript Recording
  • ai-evaluation Integration Helper
  • Advanced Scenarios (Conversation Graphs)
  • Deeper Performance Metrics (Latency, Interruption Rates)

🤝 Contributing

We welcome contributions! To report issues, suggest templates, or contribute improvements, please open a GitHub issue or PR.




Download files

Download the file for your platform.

Source Distribution

agent_simulate-0.1.3.tar.gz (30.1 kB)

Built Distribution

agent_simulate-0.1.3-py3-none-any.whl (36.2 kB)

File details

Details for the file agent_simulate-0.1.3.tar.gz.

File metadata

  • Download URL: agent_simulate-0.1.3.tar.gz
  • Upload date:
  • Size: 30.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.1 CPython/3.13.2 Darwin/25.2.0

File hashes

Hashes for agent_simulate-0.1.3.tar.gz
  • SHA256: 10e40107a789badb2db815ed5a3aebc9e041b9959e81f3cde3d091e59a5624d1
  • MD5: 3dcbde33bfe23793052e21056f536245
  • BLAKE2b-256: 6e36d52f90b60f0f96be48afc0d93a36a08e3163c1030621763e4f7a226e3efa

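
To check a downloaded archive against the published digests above, a small standard-library helper is enough; sha256_of_file is written here for illustration and is not part of any tool on this page:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest for the sdist, e.g.
# sha256_of_file("agent_simulate-0.1.3.tar.gz") should equal
# "10e40107a789badb2db815ed5a3aebc9e041b9959e81f3cde3d091e59a5624d1"
```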

File details

Details for the file agent_simulate-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: agent_simulate-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 36.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.1 CPython/3.13.2 Darwin/25.2.0

File hashes

Hashes for agent_simulate-0.1.3-py3-none-any.whl
  • SHA256: 97755a0b9598849b626905a84b007a6d562de28c02290d5e4b26364209a39d2d
  • MD5: 69d858ebb0838343eebeb12c27f4591d
  • BLAKE2b-256: 188f0591bda8f29a9feccc5d79d3c8e62f6a09d48c54404b3cd2d5f7900b5f0d

