The open-source evaluation framework for AI agents — test, compare, ship with confidence

LitmusAI

CI PyPI Python 3.10+ License: MIT

Test framework for AI agents. Think pytest for LLMs — assertions, cost tracking, safety scanning.

pip install litmuseval

Why I built this

I was switching between GPT-4.1 and Claude, tweaking prompts, changing architectures — and I had no idea if anything was actually getting better. My "evaluation" was typing questions into a chat window and eyeballing the responses.

So I built this. Now I can run the same tests across models and know exactly what's working.

Quick start

import asyncio
import litmusai
from litmusai import Agent, TestSuite, TestCase, Numeric, Contains, evaluate

litmusai.configure(api_key="sk-...")

agent = Agent.from_openai_chat(model="gpt-4.1")

suite = TestSuite(name="basics")
suite.add_case(TestCase(
    id="math", name="Percentage",
    task="What is 15% of 240?",
    assertions=[Numeric(36, tolerance=0.01)],
))
suite.add_case(TestCase(
    id="fact", name="Author",
    task="Who wrote 1984?",
    assertions=[Contains(["Orwell"])],
))

results = asyncio.run(evaluate(agent, suite))
# ✅ 2/2 passed | $0.0003 | 937ms avg

Or write tests in YAML:

name: my-tests
cases:
  - id: refund
    task: "I want to return my shoes"
    assertions:
      - type: contains
        patterns: ["return", "refund"]
        mode: any
      - type: not_contains
        patterns: ["sorry, I can't help"]
Then run it:

litmus run --suite my-tests.yaml --agent my_agent:agent

What it actually does

Assertions — 15 types, not just substring matching. Numeric extraction (handles "thirty-six"), JSON schema validation, embedding similarity, LLM-as-judge, regex, composable logic (All, AnyOf, AtLeast).
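To illustrate the numeric-extraction idea, here is a minimal plain-Python sketch (not litmusai's actual implementation) that reads a number whether it appears as digits or as a "thirty-six"-style word:

```python
import re

# Illustrative word tables; a real extractor would cover far more cases.
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}
ONES = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
        "six": 6, "seven": 7, "eight": 8, "nine": 9}

def extract_number(text: str):
    """Return the first number in text, as digits or as a tens-ones word pair."""
    m = re.search(r"-?\d+(?:\.\d+)?", text)
    if m:
        return float(m.group())
    m = re.search(r"\b(%s)(?:[-\s](%s))?\b" % ("|".join(TENS), "|".join(ONES)),
                  text.lower())
    if m:
        return float(TENS[m.group(1)] + (ONES[m.group(2)] if m.group(2) else 0))
    return None

print(extract_number("The answer is thirty-six."))  # 36.0
print(extract_number("Roughly 36.0"))               # 36.0
```

An assertion like Numeric(36, tolerance=0.01) can then compare the extracted value instead of doing a brittle substring match.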

Safety scanning — 46 attack prompts across prompt injection, jailbreaks, PII leaks, bias, hallucination. My GPT-4.1 agent scored 78/100 — it literally replied "HACKED" to one of the injection tests.

Real cost tracking — costs come from actual API responses, not tiktoken estimates. Tiktoken can be off by 10-20%.
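To make the distinction concrete, here is a plain-Python sketch of billing from the usage block the API returns rather than a local token count; the prices below are invented for illustration, not current rates:

```python
# Token counts as reported in the provider's API response, not re-tokenized locally.
usage = {"prompt_tokens": 412, "completion_tokens": 128}

# Hypothetical per-token prices ($2 / $8 per million tokens).
PRICE = {"prompt": 2.00 / 1_000_000, "completion": 8.00 / 1_000_000}

cost = (usage["prompt_tokens"] * PRICE["prompt"]
        + usage["completion_tokens"] * PRICE["completion"])
print(f"${cost:.6f}")  # $0.001848
```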

Multi-run stats — run the same test 5 times. Turns out some models pass a test 3 out of 5 times. You don't catch that with a single run.
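The statistics themselves are simple; a plain-Python sketch with made-up data shows why a single run can mislead:

```python
# Illustrative only: aggregate pass/fail over repeated runs of one test case.
runs = [True, False, True, True, False]  # same test, five runs
pass_rate = sum(runs) / len(runs)
print(f"pass rate: {pass_rate:.0%}")  # any single run would have shown 100% or 0%
```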

Regression detection — diff two runs and see what changed. Pass rate up, cost down? Ship it.
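A plain-Python sketch of what a diff between two runs boils down to (illustrative data, not the litmus CLI's output format):

```python
# Per-case results from two hypothetical runs.
before = {"math": {"passed": True, "cost": 0.0004},
          "fact": {"passed": False, "cost": 0.0002}}
after = {"math": {"passed": True, "cost": 0.0003},
         "fact": {"passed": True, "cost": 0.0002}}

# Cases that flipped between runs, plus the aggregate cost change.
regressions = [k for k in before if before[k]["passed"] and not after[k]["passed"]]
fixes = [k for k in before if not before[k]["passed"] and after[k]["passed"]]
cost_delta = (sum(c["cost"] for c in after.values())
              - sum(c["cost"] for c in before.values()))
print(f"fixed: {fixes}, regressed: {regressions}, cost delta: {cost_delta:+.4f}")
```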

Some numbers I found interesting

I ran the same suite across models:

Model            Pass Rate  Cost    Cost/Correct
GPT-4.1          100%       $0.017  $0.0034
Claude Sonnet 4  100%       $0.011  $0.0018
Claude Opus 4    83%        $0.043  $0.0085

Opus costs 2.5x more per correct answer than GPT-4.1 and still scores lower. We had been using it for months.

Connect your agent

Agent.from_openai_chat(model="gpt-4.1")           # OpenAI / compatible
Agent.from_azure(resource="r", deployment="d")     # Azure
Agent.from_function(my_fn)                         # any async function
Agent.from_url("http://localhost:8000/chat")       # HTTP endpoint
Agent.from_langchain(chain)                        # LangChain
Agent.from_crewai(crew)                            # CrewAI
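For from_function, the callable appears to be an async function from task text to response text; the exact signature here is my assumption, with a stub in place of a real model call:

```python
import asyncio

# Hypothetical agent function of the shape Agent.from_function likely expects:
# an async callable taking the task string and returning the response string.
async def my_fn(task: str) -> str:
    # A real agent would call a model or a chain here; this stub just echoes.
    return f"echo: {task}"

print(asyncio.run(my_fn("Who wrote 1984?")))  # echo: Who wrote 1984?
```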

Pipeline

Run eval + safety + report in one call:

import asyncio
from litmusai import Agent, Pipeline

agent = Agent.from_openai_chat(model="gpt-4.1", api_key="sk-...")

async def main():
    result = await Pipeline(
        agent, "coding",
        safety=True,
        runs=3,
        report="html",
    ).run()

asyncio.run(main())

Profiles

Presets for common scenarios:

litmus run -s coding -a agent:fn --profile quick       # fast iteration
litmus run -s coding -a agent:fn --profile thorough    # 3 runs, strict threshold
litmus run -s coding -a agent:fn --profile benchmark   # 5 runs, temp=0
litmus run -s coding -a agent:fn --profile ci          # strict threshold
litmus profiles                                         # see all

Custom profiles in YAML:

# .litmus/profiles/production.yaml
name: production
runs: 5
safety: true
safety_depth: thorough
threshold: 0.9
report: html

Built-in suites

8 suites with 50 test cases. They aren't meant to be comprehensive; treat them as a starting point and write your own for your domain.

litmus suites                                    # list them
litmus run --suite coding --agent my_agent:agent  # run one

coding · research · safety · planning · customer_support · summarization · instruction_following · tool_use

Custom assertions

from litmusai.assertions import Assertion, AssertionResult, register_assertion

class MaxWords(Assertion):
    def __init__(self, limit: int):
        self.limit = limit

    def check(self, response: str, **kwargs) -> AssertionResult:
        count = len(response.split())
        return AssertionResult(
            passed=count <= self.limit,
            score=min(1.0, self.limit / max(count, 1)),
            reason=f"{count} words (max {self.limit})",
            assertion_type="MaxWords",
        )

register_assertion("max_words", MaxWords)
# Now usable in YAML: { type: max_words, limit: 100 }

CLI

litmus run --suite coding --agent my_agent:agent    # evaluate
litmus run --suite tests.yaml --runs 5              # multi-run
litmus scan --agent my_agent:agent --level thorough  # safety scan
litmus diff --before run1.json --after run2.json     # compare runs
litmus report -r results.json --html report.html     # generate report
litmus init                                          # scaffold project

CI/CD

# .github/workflows/eval.yml
- uses: kutanti/litmusai@v1
  with:
    agent: my_agent:agent
    suite: coding
    threshold: 0.8

Development

git clone https://github.com/kutanti/litmusai.git
cd litmusai && pip install -e ".[dev]"
pytest                    # 729 tests
ruff check src/ tests/    # lint
mypy src/litmusai/        # types

~9K lines of code, 35 source files. MIT licensed.

License

MIT — Kunal Tanti
