
The open source post-building layer for Agent Behavior Monitoring.

Project description


Agent Behavior Monitoring (ABM)

Track and judge any agent behavior in online and offline setups. Set up Sentry-style alerts and analyze agent behaviors / topic patterns at scale!


[NEW] 🎆 Agent Reinforcement Learning

Train your agents with multi-turn reinforcement learning using judgeval and Fireworks AI! Judgeval's ABM now integrates with Fireworks' Reinforcement Fine-Tuning (RFT) endpoint, supporting gpt-oss, qwen3, Kimi K2, DeepSeek, and more.

Judgeval's agent monitoring infra provides a simple harness for integrating GRPO into any Python agent, giving builders a quick method to try RL with minimal code changes to their existing agents!

await trainer.train(
    agent_function=your_agent_function,  # entry point to your agent
    scorers=[RewardScorer()],  # Custom scorer you define based on task criteria, acts as reward
    prompts=training_prompts  # Tasks
)

That's it! Judgeval automatically manages trajectory collection and reward tagging, so your agent can learn from production data with minimal code changes.

👉 Check out the Wikipedia Racer notebook, where an agent learns to navigate Wikipedia using RL, to see Judgeval in action.
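To make the reward idea concrete, here is a minimal, purely illustrative sketch of a path-based reward for a Wikipedia Racer-style task (the function name and scoring scheme are our assumptions, not the notebook's actual code):

```python
# Hypothetical reward for a Wikipedia-racing task: high reward for reaching
# the target page, with a penalty for longer click paths.
def wiki_racer_reward(path: list[str], target: str) -> float:
    if not path or path[-1] != target:
        return 0.0  # never reached the target page
    # fewer hops -> higher reward; the floor keeps any success above zero
    return max(0.1, 1.0 - 0.1 * (len(path) - 1))
```

In judgeval, logic like this would live inside a custom scorer whose score acts as the reward signal for GRPO.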

You can view and monitor training progress for free via the Judgment Dashboard.

Judgeval Overview

Judgeval is an open-source framework for agent behavior monitoring. Judgeval offers a toolkit to track and judge agent behavior in online and offline setups, enabling you to convert interaction data from production/test environments into improved agents. To get started, try running one of the notebooks below or dive deeper in our docs.

Our mission is to unlock the power of production data for agent development, enabling teams to improve their apps by catching real-time failures and optimizing over their users' preferences.

📚 Cookbooks

| Try Out | Notebook | Description |
| --- | --- | --- |
| RL | Wikipedia Racer | Train agents with reinforcement learning |
| Online ABM | Research Agent | Monitor agent behavior in production |
| Custom Scorers | HumanEval | Build custom evaluators for your agents |
| Offline Testing | [Get Started For Free] | Compare how different prompts, models, or agent configs affect performance across any metric |

You can access our repo of cookbooks.

You can find a list of video tutorials for Judgeval use cases.

Why Judgeval?

🤖 Simple to run multi-turn RL: Optimize your agents with multi-turn RL without managing compute infrastructure or data pipelines. Just add a few lines of code to your existing agent code and train!

⚙️ Custom Evaluators: You aren't restricted to prefab scorers for monitoring. Judgeval provides simple abstractions for custom Python scorers, supporting any LLM-as-a-judge rubrics/models and code-based scorers that integrate with our live agent-tracking infrastructure. Learn more

🚨 Production Monitoring: Run any custom scorer in a hosted, virtualized secure container to flag agent behaviors online in production. Get Slack alerts for failures and add custom hooks to address regressions before they impact users. Learn more

📊 Behavior/Topic Grouping: Group agent runs by behavior type or topic for deeper analysis. Drill down into subsets of users, agents, or use cases to reveal patterns of agent behavior.

🧪 Run experiments on your agents: Test different prompts, models, or agent configs across customer segments. Measure which changes improve agent performance and reduce bad agent behaviors.
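As a toy illustration of such a comparison (the scores below are made up, not real benchmark numbers):

```python
from statistics import mean

# Made-up scorer results from running the same test prompts under two configs
scores = {
    "baseline_prompt": [0.6, 0.7, 0.5, 0.8],
    "revised_prompt":  [0.8, 0.9, 0.7, 0.9],
}

averages = {config: mean(vals) for config, vals in scores.items()}
best = max(averages, key=averages.get)  # config with the highest mean score
```

In practice, Judgeval collects these scores for you from offline test runs; the comparison itself is this simple.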

🛠️ Quickstart

Get started with Judgeval by installing our SDK using pip:

pip install judgeval

Ensure you have your JUDGMENT_API_KEY and JUDGMENT_ORG_ID environment variables set to connect to the Judgment Platform.

export JUDGMENT_API_KEY=...
export JUDGMENT_ORG_ID=...

If you don't have keys, create an account for free on the platform!

Start monitoring with Judgeval

from judgeval.tracer import Tracer, wrap
from judgeval.data import Example
from judgeval.scorers import AnswerRelevancyScorer
from openai import OpenAI


judgment = Tracer(project_name="default_project")
client = wrap(OpenAI())  # tracks all LLM calls

@judgment.observe(span_type="tool")
def format_question(question: str) -> str:
    # dummy tool
    return f"Question: {question}"

@judgment.observe(span_type="function")
def run_agent(prompt: str) -> str:
    task = format_question(prompt)
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[{"role": "user", "content": task}]
    )
    answer = response.choices[0].message.content

    judgment.async_evaluate(  # trigger online monitoring
        scorer=AnswerRelevancyScorer(threshold=0.5),  # swap with any scorer
        example=Example(input=task, actual_output=answer),  # customize to your data
        model="gpt-5",
    )
    return answer

run_agent("What is the capital of the United States?")

Running this code will deliver monitoring results to your free platform account and should look like this:

(Screenshot: Judgment Platform trajectory view)

Customizable Scorers Over Agent Behavior

Judgeval's strongest suit is full customization of the scorers you can run online monitoring with. You aren't restricted to single-prompt LLM judges or prefab scorers: if you can express your scorer in Python code, Judgeval can monitor it. Under the hood, Judgeval hosts your scorer in a virtualized secure container, enabling online monitoring for any scorer.

First, create a behavior scorer in a file called helpfulness_scorer.py:

from judgeval.data import Example
from judgeval.scorers.example_scorer import ExampleScorer

# Define custom example class
class QuestionAnswer(Example):
    question: str
    answer: str

# Define a server-hosted custom scorer
class HelpfulnessScorer(ExampleScorer):
    name: str = "Helpfulness Scorer"
    server_hosted: bool = True  # Enable server hosting
    async def a_score_example(self, example: QuestionAnswer):
        # Custom scoring logic for agent behavior
        # Can be an arbitrary combination of code and LLM calls
        if len(example.answer) > 10 and "?" not in example.answer:
            self.reason = "Answer is detailed and provides helpful information"
            return 1.0
        else:
            self.reason = "Answer is too brief or unclear"
            return 0.0
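Before deploying, the heuristic inside a_score_example can be sanity-checked on its own. A stand-alone version of the same rule, written here as plain Python without the judgeval classes:

```python
def helpfulness_heuristic(answer: str) -> float:
    # Same rule as the scorer above: detailed, non-question answers pass
    if len(answer) > 10 and "?" not in answer:
        return 1.0
    return 0.0
```

Because custom scorers are ordinary Python, you can unit-test the scoring logic locally before uploading it.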

Then deploy your scorer to Judgment's infrastructure:

echo "pydantic" > requirements.txt
uv run judgeval upload_scorer helpfulness_scorer.py requirements.txt

Now you can instrument your agent with monitoring and online evaluation:

from judgeval.tracer import Tracer, wrap
from helpfulness_scorer import HelpfulnessScorer, QuestionAnswer
from openai import OpenAI

judgment = Tracer(project_name="default_project")
client = wrap(OpenAI())  # tracks all LLM calls

@judgment.observe(span_type="tool")
def format_task(question: str) -> str:  # replace with your prompt engineering
    return f"Please answer the following question: {question}"

@judgment.observe(span_type="tool")
def answer_question(prompt: str) -> str:  # replace with your LLM system calls
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

@judgment.observe(span_type="function")
def run_agent(question: str) -> str:
    task = format_task(question)
    answer = answer_question(task)

    # Add online evaluation with server-hosted scorer
    judgment.async_evaluate(
        scorer=HelpfulnessScorer(),
        example=QuestionAnswer(question=question, answer=answer),
        sampling_rate=0.9  # Evaluate 90% of agent runs
    )

    return answer

if __name__ == "__main__":
    result = run_agent("What is the capital of the United States?")
    print(result)
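The sampling_rate parameter controls what fraction of runs get evaluated; conceptually it behaves like an independent coin flip per run (a sketch of the idea, not judgeval's internals):

```python
import random

def should_evaluate(sampling_rate: float, rng: random.Random) -> bool:
    # Evaluate this run with probability `sampling_rate`
    return rng.random() < sampling_rate

rng = random.Random(42)  # seeded for reproducibility
evaluated = sum(should_evaluate(0.9, rng) for _ in range(1000))
# roughly 900 of 1000 runs get scored
```

Sampling lets you keep evaluation costs proportional to traffic while still catching behavior regressions.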

Congratulations! Your online eval result should look like this:

(Screenshot: custom scorer online ABM results)

You can now run any online scorer in secure Firecracker microVMs with no latency impact on your applications.


Judgeval is created and maintained by Judgment Labs.

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

judgeval-0.23.10.tar.gz (23.2 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

judgeval-0.23.10-py3-none-any.whl (206.9 kB)

Uploaded Python 3

File details

Details for the file judgeval-0.23.10.tar.gz.

File metadata

  • Download URL: judgeval-0.23.10.tar.gz
  • Upload date:
  • Size: 23.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for judgeval-0.23.10.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ad281cd7377b6573d320ac697bb26e675a34a0a91f5b55837169baf6ab50c404 |
| MD5 | c1f6a1b753462eda2d9079f19e6608d2 |
| BLAKE2b-256 | 25e47ba2f17642a1d1363163ed37cdcd68f1f2d432f5e3d04308912ada7afca8 |

See more details on using hashes here.

File details

Details for the file judgeval-0.23.10-py3-none-any.whl.

File metadata

  • Download URL: judgeval-0.23.10-py3-none-any.whl
  • Upload date:
  • Size: 206.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for judgeval-0.23.10-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 43c9bab58e9125fafd18b50cd6e5e75f9921c5367d63a35e9ece8f2a7ffbdcb2 |
| MD5 | d0db540d2e1da6d368cbda979e1ea191 |
| BLAKE2b-256 | b22448ecfb8253056955f30df25a255e4a175b29a3a979fbfa54dccca390a6c8 |

See more details on using hashes here.
