
The open-source evaluation framework for AI agents — test, compare, ship with confidence

Project description

LitmusAI

CI · PyPI · Python 3.10+ · License: MIT

Test framework for AI agents. Think pytest for LLMs — assertions, cost tracking, safety scanning.

pip install litmuseval

Why I built this

Evaluating AI agents by hand doesn't scale. I needed something that could run the same tests across any model or agent workflow and tell me what actually changed: cost, accuracy, safety. This is that.

Quick start

import asyncio
import litmusai
from litmusai import Agent, TestSuite, TestCase, Numeric, Contains, evaluate

litmusai.configure(api_key="sk-...")

agent = Agent.from_openai_chat(model="gpt-4.1")

suite = TestSuite(name="basics")
suite.add_case(TestCase(
    id="math", name="Percentage",
    task="What is 15% of 240?",
    assertions=[Numeric(36, tolerance=0.01)],
))
suite.add_case(TestCase(
    id="fact", name="Author",
    task="Who wrote 1984?",
    assertions=[Contains(["Orwell"])],
))

results = asyncio.run(evaluate(agent, suite))
# ✅ 2/2 passed | $0.0003 | 937ms avg

Or write tests in YAML:

name: my-tests
cases:
  - id: refund
    task: "I want to return my shoes"
    assertions:
      - type: contains
        patterns: ["return", "refund"]
        mode: any
      - type: not_contains
        patterns: ["sorry, I can't help"]
Then run it:

litmus run --suite my-tests.yaml --agent my_agent:agent

What it actually does

Assertions — 15 types, not just substring matching. Numeric extraction (handles "thirty-six"), JSON schema validation, embedding similarity, LLM-as-judge, regex, composable logic (All, AnyOf, AtLeast).
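For instance, combining assertions on a single case might look like the sketch below. All and AnyOf are named above, but their import path and whether they take a list or varargs are assumptions on my part:

from litmusai import TestCase, Numeric, Contains
from litmusai.assertions import All, AnyOf  # import path is an assumption

# Pass only if the answer has the right number AND mentions either unit spelling.
case = TestCase(
    id="speed", name="Unit conversion",
    task="Convert 100 km/h to m/s.",
    assertions=[All([
        Numeric(27.78, tolerance=0.05),
        AnyOf([Contains(["m/s"]), Contains(["meters per second"])]),
    ])],
)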

Safety scanning — 46 attack prompts across prompt injection, jailbreaks, PII leaks, bias, hallucination. My GPT-4.1 agent scored 78/100 — it literally replied "HACKED" to one of the injection tests.

Real cost tracking — costs come from actual API responses, not tiktoken estimates. Tiktoken can be off by 10-20%.

Multi-run stats — run the same test 5 times. Turns out some models pass a test 3 out of 5 times. You don't catch that with a single run.

Regression detection — diff two runs and see what changed. Pass rate up, cost down? Ship it.
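Safety scanning, multi-run stats, and regression detection each map to a CLI verb (the same commands appear in the CLI section below):

litmus scan --agent my_agent:agent --level thorough  # safety scan
litmus run --suite tests.yaml --runs 5               # multi-run stats
litmus diff --before run1.json --after run2.json     # regression diff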

Some numbers I found interesting

I ran the same suite across models:

Model            Pass Rate  Cost    Cost/Correct
GPT-4.1          100%       $0.017  $0.0034
Claude Sonnet 4  100%       $0.011  $0.0018
Claude Opus 4    83%        $0.043  $0.0085

Opus costs about 2.5x more per correct answer than GPT-4.1 ($0.0085 vs $0.0034) and scores lower. We were using it for months.

Connect your agent

Agent.from_openai_chat(model="gpt-4.1")           # OpenAI / compatible
Agent.from_azure(resource="r", deployment="d")     # Azure
Agent.from_function(my_fn)                         # any async function
Agent.from_url("http://localhost:8000/chat")       # HTTP endpoint
Agent.from_langchain(chain)                        # LangChain
Agent.from_crewai(crew)                            # CrewAI
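For the from_function path, here's a minimal sketch. The exact callable signature from_function expects is an assumption; async (task: str) -> str is a guess based on the quick start:

from litmusai import Agent

# Illustrative only: assumes from_function accepts an async (task: str) -> str callable.
async def my_fn(task: str) -> str:
    return "George Orwell" if "1984" in task else "Not sure."

agent = Agent.from_function(my_fn)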

Pipeline

Run eval + safety + report in one call:

import asyncio
from litmusai import Agent, Pipeline

agent = Agent.from_openai_chat(model="gpt-4.1", api_key="sk-...")

async def main():
    result = await Pipeline(
        agent, "coding",
        safety=True,
        runs=3,
        report="html",
    ).run()

asyncio.run(main())

Profiles

Presets for common scenarios:

litmus run -s coding -a agent:fn --profile quick       # fast iteration
litmus run -s coding -a agent:fn --profile thorough    # 3 runs, strict threshold
litmus run -s coding -a agent:fn --profile benchmark   # 5 runs, temp=0
litmus run -s coding -a agent:fn --profile ci          # strict threshold
litmus profiles                                         # see all

Custom profiles in YAML:

# .litmus/profiles/production.yaml
name: production
runs: 5
safety: true
safety_depth: thorough
threshold: 0.9
report: html
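Then select it by name like any preset (assuming profiles in .litmus/profiles/ are looked up by their name field):

litmus run -s coding -a agent:fn --profile production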

Built-in suites

8 suites, 50 test cases to start with. Not meant to be comprehensive — they're a starting point. Write your own for your domain.

litmus suites                                    # list them
litmus run --suite coding --agent my_agent:agent  # run one

coding · research · safety · planning · customer_support · summarization · instruction_following · tool_use

Custom assertions

from litmusai.assertions import Assertion, AssertionResult, register_assertion

class MaxWords(Assertion):
    def __init__(self, limit: int):
        self.limit = limit

    def check(self, response: str, **kwargs) -> AssertionResult:
        count = len(response.split())
        return AssertionResult(
            passed=count <= self.limit,
            score=min(1.0, self.limit / max(count, 1)),
            reason=f"{count} words (max {self.limit})",
            assertion_type="MaxWords",
        )

register_assertion("max_words", MaxWords)
# Now usable in YAML: { type: max_words, limit: 100 }
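Putting it together, a YAML suite using the registered assertion might look like this (suite and case names are illustrative):

name: concise
cases:
  - id: brevity
    task: "Summarize the plot of 1984 in two sentences."
    assertions:
      - type: max_words
        limit: 100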

CLI

litmus run --suite coding --agent my_agent:agent    # evaluate
litmus run --suite tests.yaml --runs 5              # multi-run
litmus scan --agent my_agent:agent --level thorough  # safety scan
litmus diff --before run1.json --after run2.json     # compare runs
litmus report -r results.json --html report.html     # generate report
litmus init                                          # scaffold project

CI/CD

# .github/workflows/eval.yml
on: [pull_request]            # example trigger
jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # the agent module lives in this repo
      - uses: kutanti/litmusai@v1
        with:
          agent: my_agent:agent
          suite: coding
          threshold: 0.8

Development

git clone https://github.com/kutanti/litmusai.git
cd litmusai && pip install -e ".[dev]"
pytest                    # 803 tests
ruff check src/ tests/    # lint
mypy src/litmusai/        # types

~11K lines of code, 37 source files. MIT licensed.

License

MIT — Kunal Tanti

Project details


Download files

Download the file for your platform.

Source Distribution

litmuseval-0.3.0.tar.gz (96.3 kB)

Uploaded Source

Built Distribution


litmuseval-0.3.0-py3-none-any.whl (114.9 kB)

Uploaded Python 3

File details

Details for the file litmuseval-0.3.0.tar.gz.

File metadata

  • Download URL: litmuseval-0.3.0.tar.gz
  • Size: 96.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for litmuseval-0.3.0.tar.gz
Algorithm    Hash digest
SHA256       a6e98a2e42295ce9c3c3a9b9457f96af79a505e2d4527eaa3149d2551ccd8709
MD5          d3974729ae0651bf994654f095042b86
BLAKE2b-256  d86c2155169b1cf5336bf937c9d2f73dc6ab4ee65387113be56a8cb751d51c9f


Provenance

The following attestation bundles were made for litmuseval-0.3.0.tar.gz:

Publisher: publish.yml on kutanti/litmusai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file litmuseval-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: litmuseval-0.3.0-py3-none-any.whl
  • Size: 114.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for litmuseval-0.3.0-py3-none-any.whl
Algorithm    Hash digest
SHA256       5545e89823417f63987b57550bda0702a1eb71b84c53d0455b7a4dcaa1f4af18
MD5          4e0ed61ecf3372dfa7a1a18e2d176285
BLAKE2b-256  389f48bef0a86475e0b34d3fb39a70615ebee227f9bf59edc0cfb1944fed871c


Provenance

The following attestation bundles were made for litmuseval-0.3.0-py3-none-any.whl:

Publisher: publish.yml on kutanti/litmusai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
