
⛵️ Know how your agent performs before it goes live.


⛵️ ArkSim

Simulate multi-turn conversations with your AI agent. Find failures before production.


Documentation · Examples · Report a Bug

Demo video: https://github.com/user-attachments/assets/78706f27-cf49-41c1-8019-9dcbb8abc625

What is ArkSim?

Agents fail in ways that only show up mid-conversation. They misinterpret intent three turns in, call the wrong tool, or hallucinate a policy that does not exist. Single-turn testing misses all of this.

ArkSim generates LLM-powered synthetic users that hold realistic multi-turn conversations with your agent. Each user has a distinct profile, goal, and knowledge level. They push back, ask follow-ups, and behave like real users would.

You define scenarios; ArkSim simulates the conversations and evaluates every turn on metrics like helpfulness, faithfulness, and goal completion. The output is an interactive report showing exactly where your agent broke and why.

ArkSim flow: Scenarios → Simulation → Evaluation → Reports

Quickstart

Have an agent? Test it in a few commands:

pip install arksim
export OPENAI_API_KEY="your-key"
arksim init
# Edit my_agent.py with your agent logic, then run:
arksim simulate-evaluate config.yaml

This generates config.yaml, scenarios.json, and a starter my_agent.py.

For HTTP or A2A agents: arksim init --agent-type chat_completions or arksim init --agent-type a2a. For Anthropic or Google as the evaluation LLM: pip install "arksim[anthropic]" or pip install "arksim[google]".

Just exploring? Try an example:

pip install arksim
export OPENAI_API_KEY="your-key"
arksim examples
cd examples/e-commerce
arksim simulate-evaluate config.yaml

What you'll see

ArkSim evaluation report showing scores, failure categories, and conversation viewer

The report tells you where your agent is strong and where it breaks. You get per-metric scores, categorized failures, and full conversation transcripts so you can read the exact turns where things went wrong.

Test Your Own Agent

Python class (default)

arksim init generates a my_agent.py with a BaseAgent subclass. Replace the execute() body with your agent logic:

from arksim.simulation_engine.agent.base import BaseAgent
from arksim.simulation_engine.tool_types import AgentResponse

class MyAgent(BaseAgent):
    async def get_chat_id(self) -> str:
        return "unique-id"

    async def execute(self, user_query: str, **kwargs: object) -> str | AgentResponse:
        # Replace with your agent logic
        return "agent response"
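As an illustration of filling in that subclass, here is a toy agent that tracks the turn count. The EchoAgent name and its behavior are made up for this sketch; it assumes the BaseAgent interface shown above, and falls back to a stub base class so the snippet also runs standalone without arksim installed:

```python
# Sketch of a filled-in agent. EchoAgent is illustrative, not part of ArkSim.
import asyncio
import uuid

try:
    from arksim.simulation_engine.agent.base import BaseAgent
except ImportError:
    # Stub so the sketch runs standalone when arksim is not installed.
    class BaseAgent:
        pass

class EchoAgent(BaseAgent):
    """Toy agent that remembers the turn count; replace with real logic."""

    def __init__(self) -> None:
        self._turns = 0

    async def get_chat_id(self) -> str:
        # One id per conversation; ArkSim uses this to separate chats.
        return str(uuid.uuid4())

    async def execute(self, user_query: str, **kwargs: object) -> str:
        self._turns += 1
        return f"(turn {self._turns}) You said: {user_query}"

agent = EchoAgent()
print(asyncio.run(agent.execute("Where is my order?")))
# (turn 1) You said: Where is my order?
```

Because the simulated user holds a multi-turn conversation, any per-conversation state (like the turn counter here) belongs on the agent instance.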

Chat Completions endpoint

agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: http://localhost:8000/v1/chat/completions
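To smoke-test that config before pointing it at a real agent, you can stand up a stub endpoint. The following is our own minimal stand-in (standard library only, not part of ArkSim) that answers on the port from the config above and returns a response in the OpenAI Chat Completions shape:

```python
# Minimal stub Chat Completions endpoint for local smoke-testing.
# Illustrative only; replace with your real agent service.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        user_msg = body["messages"][-1]["content"]
        reply = {
            "object": "chat.completion",
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": f"Echo: {user_msg}"},
                "finish_reason": "stop",
            }],
        }
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("localhost", 8000), ChatHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

With the stub running, `arksim simulate-evaluate config.yaml` should complete end to end, which confirms the plumbing before you swap in the real endpoint.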

A2A protocol

agent_config:
  agent_type: a2a
  agent_name: my-agent
  api_config:
    endpoint: http://localhost:9999/agent

Write scenarios that match your agent's domain. See the Scenarios documentation for how to define goals, user profiles, and knowledge.

Why ArkSim?

  • Simulation, not just evaluation. Most tools score conversations you already have. ArkSim generates them with synthetic users who push back, ask follow-ups, and behave unpredictably.
  • Multi-turn by default. Every test is a full conversation, not a single prompt. Context loss, tool misuse, and contradictions only show up across turns.
  • Any agent, any framework. Works with 14+ frameworks through Chat Completions, A2A, or direct Python import.
  • Runs in CI. Add it as a quality gate on every PR. Exits non-zero when your agent drops below threshold.
  • Fully open source. Runs on your infrastructure. Your data never leaves.
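The CI quality gate could look like the following sketch. The workflow name, Python version, action versions, and secret name are illustrative; the arksim commands are the ones from the Quickstart, and the final step fails the PR whenever the agent drops below threshold:

```yaml
# .github/workflows/agent-quality.yml (illustrative)
name: agent-quality
on: [pull_request]
jobs:
  simulate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install arksim
      - env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        # Exits non-zero below threshold, which fails the check.
        run: arksim simulate-evaluate config.yaml
```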

Integrations

Framework            Provider
Claude Agent SDK     Anthropic
OpenAI Agents SDK    OpenAI
Google ADK           Google
LangChain            LangChain
LangGraph            LangChain
CrewAI               CrewAI
Dify                 Dify
AutoGen              Microsoft
LlamaIndex           LlamaIndex
Pydantic AI          Pydantic
Rasa                 Rasa
Smolagents           Hugging Face
Mastra               TypeScript
Vercel AI SDK        TypeScript

See examples for end-to-end projects with custom metrics and scenarios.

Learn More

Topic                                         Guide
Evaluation metrics (built-in and custom)      Metrics guide
CI integration (pytest and GitHub Actions)    CI setup guide
Configuration reference (all YAML settings)   Schema reference
Simulation and CLI usage                      Simulation guide
Web UI for browsing results                   Overview

Development

git clone https://github.com/arklexai/arksim.git
cd arksim
pip install -e ".[dev]"
pytest tests/

Linting and formatting:

ruff check .
ruff format .

See CONTRIBUTING.md for guidelines.

License

Apache-2.0. See LICENSE.

Citation

@misc{shea2026sage,
      title={SAGE: A Top-Down Bottom-Up Knowledge-Grounded User Simulator for Multi-turn AGent Evaluation},
      author={Ryan Shea and Yunan Lu and Liang Qiu and Zhou Yu},
      year={2026},
      eprint={2510.11997},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.11997},
}

