LevelApp is an evaluation framework for AI/LLM-based software applications. [Powered by Norma]
Project description
LevelApp: AI/LLM Evaluation Framework for Regression Testing
Overview
LevelApp is an evaluation framework designed for black-box regression testing of already-built LLM-based systems in production or testing phases. It focuses on assessing the performance and reliability of AI/LLM applications through its simulation and comparison modules. Powered by Norma.
Key benefits:
- Configuration-driven: Minimal coding required; define evaluations via YAML files.
- Supports LLM-as-a-judge for qualitative assessments and quantitative metrics for metadata evaluation.
- Modular architecture for easy extension to new workflows, evaluators, and repositories.
Features
- Simulator Module: Evaluates dialogue systems by simulating conversations using predefined scripts. It uses an LLM as a judge to score replies against references and supports metrics (e.g., Exact, Embedded, Token-based, Fuzzy) for comparing extracted metadata to ground truth.
- Comparator Module: Evaluates metadata extraction from JSON outputs (e.g., from legal/financial document processing with LLMs) by comparing against reference/ground-truth data.
- Configuration-Based Workflow: Users provide YAML configs for endpoints, parameters, data sources, and metrics, reducing the need for custom code.
- Supported Workflows: SIMULATOR, COMPARATOR, ASSESSOR (coming soon!).
- Repositories: FIRESTORE, FILESYSTEM, MONGODB.
- Evaluators: JUDGE, REFERENCE, RAG.
- Metrics: Exact, Levenshtein, and more (see docs for the full list); a brief illustration of these two metrics follows this list.
- Data Sources: Local or remote JSON for conversation scripts.
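To illustrate what the Exact and Levenshtein metrics measure when comparing extracted metadata to ground truth, here is a minimal standalone sketch; it is an assumption-level illustration of these two metric types, not LevelApp's internal implementation:

# Illustrative sketch only -- not LevelApp's internal implementation.
def exact_score(candidate: str, reference: str) -> float:
    # 1.0 on a perfect match, 0.0 otherwise.
    return 1.0 if candidate == reference else 0.0

def levenshtein_distance(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_score(candidate: str, reference: str) -> float:
    # Normalize edit distance into a 0..1 similarity.
    longest = max(len(candidate), len(reference), 1)
    return 1.0 - levenshtein_distance(candidate, reference) / longest

print(exact_score("Cardiology", "cardiology"))                  # 0.0 (case differs)
print(round(levenshtein_score("Cardiology", "cardiology"), 2))  # 0.9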
Installation
Install LevelApp via pip:
pip install levelapp
Prerequisites
- Python 3.12 or higher.
- API keys for LLM providers (e.g., OpenAI, Anthropic) if using external clients; store them in a .env file.
- Optional: Google Cloud credentials for the Firestore repository.
- Dependencies such as openai, pydantic, and numpy are installed automatically (see pyproject.toml for the full list).
Configuration
LevelApp uses a YAML configuration file to define the evaluation setup. Create a workflow_config.yaml with the following structure:
process:
project_name: "test-project"
workflow_type: SIMULATOR # Pick one of the following workflows: SIMULATOR, COMPARATOR, ASSESSOR.
evaluation_params:
attempts: 1 # Add the number of simulation attempts.
batch_size: 5
evaluation:
evaluators: # Select from the following: JUDGE, REFERENCE, RAG.
- JUDGE
- REFERENCE
providers:
- openai
- ionos
- mistral
- grok
- gemini
metrics_map:
field_1: EXACT
field_2: LEVENSHTEIN
reference_data:
path: "../data/conversation_example_1.json"
data:
endpoint:
name: conversational-agent
base_url: http://127.0.0.1:8000
path: /v1/chat
method: POST
timeout: 60
retry_count: 3
retry_backoff: 0.5
headers:
- name: model_id
value: meta-llama/Meta-Llama-3-8B-Instruct
secure: false
- name: x-api-key
value: API_KEY # Load from .env file using python-dotenv.
secure: true
- name: Content-Type
value: application/json
secure: false
request_schema:
# Static field to be included in every request.
- field_path: message.source
value: system
value_type: static
required: true
# Dynamic field to be populated from runtime context.
- field_path: message.text
value: message_text # the key from the runtime context.
value_type: dynamic
required: true
# Env-based field (from OS environment variables).
- field_path: metadata.env
value: ENV_VAR_NAME
value_type: env
required: false
response_mapping:
# Map the response fields that will be extracted.
- field_path: reply.text
extract_as: agent_reply # The simulator requires this key: 'agent_reply'.
- field_path: reply.metadata
extract_as: generated_metadata # The simulator requires this key: 'generated_metadata'.
- field_path: reply.guardrail_flag
extract_as: guardrail_flag # The simulator requires this key: 'guardrail_flag'.
repository:
type: FIRESTORE # Pick one of the following: FIRESTORE, FILESYSTEM
project_id: "(default)"
database_name: ""
- Endpoint Configuration: Define how to interact with your LLM-based system (base URL, headers, request/response payload schema).
- Placeholders: For dynamic request-schema fields, the 'value' entry names the runtime-context key used to populate the field at runtime (e.g., context = {'message_text': "Hello, world!"}); a short sketch follows this list.
- Secrets: Store API keys in .env and load them via python-dotenv (e.g., API_KEY=your_key_here).
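To make the placeholder mechanics concrete, the sketch below shows how a request payload could be assembled from the request_schema above; build_payload and set_nested are hypothetical helpers written for this example, not part of the LevelApp API:

import os

def set_nested(payload: dict, dotted_path: str, value) -> None:
    # Hypothetical helper: walk/create nested dicts along a dotted field path.
    keys = dotted_path.split(".")
    node = payload
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

def build_payload(request_schema: list, context: dict) -> dict:
    # Resolve each configured field from its declared source.
    payload = {}
    for field in request_schema:
        if field["value_type"] == "static":
            resolved = field["value"]               # literal value from the config
        elif field["value_type"] == "dynamic":
            resolved = context.get(field["value"])  # key looked up in the runtime context
        else:  # "env"
            resolved = os.getenv(field["value"])    # OS environment variable
        if resolved is None:
            if field.get("required", False):
                raise ValueError(f"missing required field: {field['field_path']}")
            continue
        set_nested(payload, field["field_path"], resolved)
    return payload

schema = [
    {"field_path": "message.source", "value": "system", "value_type": "static", "required": True},
    {"field_path": "message.text", "value": "message_text", "value_type": "dynamic", "required": True},
]
print(build_payload(schema, context={"message_text": "Hello, world!"}))
# {'message': {'source': 'system', 'text': 'Hello, world!'}}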
For conversation scripts (used in Simulator), provide a JSON file with this schema:
{
"scripts": [
{
"variable_request_schema": false,
"interactions": [
{
"user_message": "Hello, I would like to book an appointment with a doctor.",
"reference_reply": "Sure, I can help with that. Could you please specify the type of doctor you need to see?",
"interaction_type": "initial",
"reference_metadata": {},
"guardrail_flag": false,
"request_payload": {}
},
{
"user_message": "I need to see a cardiologist.",
"reference_reply": "When would you like to schedule your appointment?",
"interaction_type": "intermediate",
"reference_metadata": {},
"guardrail_flag": false,
"request_payload": {}
},
{
"user_message": "I would like to book it for next Monday morning.",
"reference_reply": "We have an available slot at 10 AM next Monday. Does that work for you?",
"interaction_type": "intermediate",
"reference_metadata": {
"appointment_type": "Cardiology",
"date": "next Monday",
"time": "10 AM"
},
"guardrail_flag": false,
"request_payload": {}
},
{
"id": "f4f2dd35-71d7-4b75-ba2b-93a4f546004a",
"user_message": "Yes, please book it for 10 AM then.",
"reference_reply": "Your appointment with the cardiologist is booked for 10 AM next Monday. Is there anything else I can help you with?",
"interaction_type": "final",
"reference_metadata": {},
"guardrail_flag": false,
"request_payload": {}
}
],
"description": "A conversation about booking a doctor appointment.",
"details": {
"context": "Booking a doctor appointment"
}
}
]
}
- Fields (a sanity-check sketch follows this list):
  - Script level:
    - description: a brief description of the script.
    - details: any additional information.
    - variable_request_schema: a flag that defaults to False. When set to True, it lets the user pass the request payload content directly from the reference file, ignoring any request-schema configuration made in the YAML.
  - Interactions: a list of single-turn conversation data for the simulation and evaluation process:
    - user_message_path: if variable_request_schema is True, the user must indicate the path of the user message inside the attached request_payload dict. Example: "user_message_path": "user.message" for "request_payload": {"user": {"message": "Hello, world!", "role": "user"}}.
    - user_message: the text content that will be used as the user message for the simulation.
    - reference_reply: the text content of the reference reply.
    - reference_metadata: a dict containing the reference metadata.
    - guardrail_flag: the guardrail flag (True/False).
    - request_payload: a dict containing the request payload that must be sent for each turn.
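Before running a simulation, it can help to sanity-check a script file against the fields above. The following is a minimal sketch based on the schema described in this section; check_scripts is a hypothetical helper, not part of the LevelApp API:

import json

REQUIRED_INTERACTION_KEYS = {"user_message", "reference_reply"}

def check_scripts(path: str) -> None:
    # Load the reference file and verify the fields described above.
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    for s_idx, script in enumerate(data["scripts"]):
        variable_schema = script.get("variable_request_schema", False)
        for i_idx, turn in enumerate(script["interactions"]):
            missing = REQUIRED_INTERACTION_KEYS - turn.keys()
            if missing:
                raise ValueError(f"script {s_idx}, interaction {i_idx}: missing {missing}")
            if variable_schema and "user_message_path" not in turn:
                raise ValueError(
                    f"script {s_idx}, interaction {i_idx}: 'user_message_path' "
                    "is required when variable_request_schema is true"
                )

check_scripts("../data/conversation_example_1.json")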
In your .env file, add the credentials for each LLM provider used in the evaluation process.
# Add the API key for any used provider:
OPENAI_API_KEY=
IONOS_API_KEY=
ANTHROPIC_API_KEY=
MISTRAL_API_KEY=
GEMINI_API_KEY=
GROK_API_KEY=
# Include the model of choice for any used provider:
OPENAI_MODEL="gpt-4o-mini"
GROK_MODEL="llama-3.3-70b-versatile"
GEMINI_MODEL="gemini-2.5-flash"
# For IONOS, you must include the base URL and the model ID.
IONOS_BASE_URL="https://openai.inference.de-txl.ionos.com"
IONOS_MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"
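To confirm these credentials are actually picked up at runtime, you can load them with python-dotenv before starting a session; the fallback model below is shown for illustration only:

from dotenv import load_dotenv  # pip install python-dotenv
import os

load_dotenv()  # reads the .env file from the current working directory

openai_key = os.getenv("OPENAI_API_KEY")
openai_model = os.getenv("OPENAI_MODEL", "gpt-4o-mini")  # fallback shown for illustration
if openai_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")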
Usage Example
To run an evaluation:
- Prepare your YAML config and JSON data files.
- Use the following Python script:
if __name__ == "__main__":
from levelapp.workflow import WorkflowConfig
from levelapp.core.session import EvaluationSession
# Load configuration from YAML
config = WorkflowConfig.load(path="../data/workflow_config.yaml")
# Run evaluation session (You can enable/disable the monitoring aspect)
with EvaluationSession(session_name="test-session-1", workflow_config=config, enable_monitoring=False) as session:
session.run()
results = session.workflow.collect_results()
print("Results:", results)
stats = session.get_stats()
print(f"session stats:\n{stats}")
Alternatively, to pass the configuration and reference data from in-memory variables, load them manually as follows:
if __name__ == "__main__":
from levelapp.workflow import WorkflowConfig
from levelapp.core.session import EvaluationSession
config_dict = {
"process": {
"project_name": "test-project",
"workflow_type": "SIMULATOR", # Pick one of the following workflows: SIMULATOR, COMPARATOR, ASSESSOR.
"evaluation_params": {
"attempts": 1, # Add the number of simulation attempts.
}
},
"evaluation": {
"evaluators": ["JUDGE", "REFERENCE"], # Select from the following: JUDGE, REFERENCE, RAG.
"providers": ["openai", "ionos"],
"metrics_map": {
"field_1": "EXACT",
"field_2": "LEVENSHTEIN"
}
},
"reference_data": {
"path": "../data/conversation_example_1.json",
"data": None
},
"endpoint": {
"name": "conversational-agent",
"base_url": "http://127.0.0.1:8000",
"path": "/v1/chat",
"method": "POST",
"timeout": 60,
"retry_count": 3,
"retry_backoff": 0.5,
"headers": [
{
"name": "model_id",
"value": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"secure": False
},
{
"name": "x-api-key",
"value": "API_KEY", # Load from .env file using python-dotenv.
"secure": True
},
{
"name": "Content-Type",
"value": "application/json",
"secure": False
}
],
"request_schema": [
{
"field_path": "message.source",
"value": "system",
"value_type": "static",
"required": True
},
{
"field_path": "message.text",
"value": "message_text", # the key from the runtime context.
"value_type": "dynamic",
"required": True
},
{
"field_path": "metadata.env",
"value": "ENV_VAR_NAME",
"value_type": "env",
"required": False
}
],
"response_mapping": [
{
"field_path": "reply.text",
"extract_as": "agent_reply" # Remember that the simulator requires this key: 'agent_reply'.
},
            {
                "field_path": "reply.metadata",
                "extract_as": "generated_metadata"  # Remember that the simulator requires this key: 'generated_metadata'.
            },
            {
                "field_path": "reply.guardrail_flag",
                "extract_as": "guardrail_flag"  # Remember that the simulator requires this key: 'guardrail_flag'.
            }
]
},
"repository": {
"type": "FIRESTORE", # Pick one of the following: FIRESTORE, FILESYSTEM
"project_id": "(default)",
"database_name": ""
}
}
content = {
"scripts": [
{
"interactions": [
{
"user_message": "Hello!",
"reference_reply": "Hello, how can I help you!"
},
{
"user_message": "I need an apartment",
"reference_reply": "sorry, but I can only assist you with booking medical appointments."
},
]
},
]
}
# Load configuration from a dict variable
config = WorkflowConfig.from_dict(content=config_dict)
# Load reference data from dict variable
config.set_reference_data(content=content)
evaluation_session = EvaluationSession(
session_name="test-session",
workflow_config=config,
enable_monitoring=True # To disable the monitoring aspect, set this to False.
)
with evaluation_session as session:
# Optional: Run connectivity test before the full evaluation
test_results = session.run_connectivity_test(
context={"user_message": "I want to book an appointment with a dentist."}
)
print(f"Connectivity Test Results:\n{test_results}\n---")
session.run()
results = session.workflow.collect_results()
print("Results:", results)
stats = session.get_stats()
print(f"session stats:\n{stats}")
- This loads the config, runs the specified workflow (e.g., Simulator), collects results, and prints stats.
For more examples, see the examples/ directory.
Or check the following Colab notebook for a quick and easy demo:

| Notebook | Description |
|---|---|
| Quick-tour of LevelApp framework | Tutorial notebook with UI widgets |
Visualization
LevelApp includes powerful visualization capabilities to help you analyze and present evaluation results through interactive charts and dashboards.
Features
- Automatic Dashboard Generation: Create comprehensive HTML dashboards with all evaluation metrics
- Multi-Format Export: Export visualizations in HTML and PNG formats.
- Interactive Charts: Generate interactive Plotly charts for detailed analysis
- Provider Comparison: Compare performance across different LLM providers
- Score Trends: Visualize score trends across conversation scripts
- Distribution Analysis: Analyze score distributions for individual providers
- Summary Metrics: Display key performance indicators and statistics
Installation
To use visualization features, install the required dependencies:
pip install plotly kaleido jinja2
These dependencies enable:
- plotly: Interactive chart generation
- kaleido: Static image export (PNG, PDF)
- jinja2: HTML dashboard templating
Basic Usage
Generate visualizations directly from an evaluation session:
from levelapp.core.session import EvaluationSession
from levelapp.workflow import WorkflowConfig
# Load configuration
config = WorkflowConfig.load(path="workflow_config.yaml")
# Run evaluation with visualization
with EvaluationSession(
session_name="my-evaluation",
workflow_config=config,
enable_monitoring=True
) as session:
# Run the evaluation
session.run()
# Generate visualizations
files = session.visualize_results(
output_dir="./visualization_output",
formats=["html", "png"]
)
# Access generated files
print(f"Dashboard: {files['html']}")
print(f"Charts: {files['png']}")
Available Chart Types
- Provider Comparison: Bar charts comparing average scores across LLM providers
- Score Trend: Line charts showing score progression across conversation scripts
- Score Distribution: Histograms showing score distribution for specific providers
- Summary Metrics: Key performance indicators and aggregate statistics
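As a rough illustration of the Provider Comparison chart type, the standalone Plotly sketch below plots made-up average scores per provider; it is not the output of LevelApp's ChartGenerator:

import plotly.graph_objects as go

# Illustrative data only -- in practice these averages come from the
# evaluation results collected by LevelApp.
providers = ["openai", "ionos", "mistral"]
avg_scores = [0.87, 0.81, 0.84]

fig = go.Figure(go.Bar(x=providers, y=avg_scores))
fig.update_layout(
    template="plotly_white",
    title="Average judge score per provider",
    xaxis_title="Provider",
    yaxis_title="Average score",
)
fig.write_html("provider_comparison.html")  # interactive export
fig.write_image("provider_comparison.png")  # static export; requires kaleido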
Customization
Customize visualizations by:
- Themes: Choose from Plotly themes (plotly, plotly_white, plotly_dark, ggplot2, seaborn, etc.)
- Export Formats: Select from html or png.
- Output Directory: Specify custom paths for generated files
- Chart Layout: Modify chart properties through the ChartGenerator API
Example with custom theme:
# Use a dark theme for all charts.
# Note: ChartGenerator is part of LevelApp's visualization API; import it as described in the docs.
chart_gen = ChartGenerator(theme="plotly_dark")
# Generate with custom settings
files = session.visualize_results(
output_dir="./reports",
formats=["html", "png", "pdf"],
theme="plotly_dark"
)
Example Output
The visualization module generates:
- Interactive HTML Dashboard: Complete evaluation report with all charts and metrics
- Static Images: PNG/PDF exports for presentations and reports
- JSON Data: Raw data export for custom processing
For complete examples, see the examples/visualization_example/ directory.
Documentation
Detailed docs are in the docs/ directory, including API references and advanced configuration.
Contributing
Contributions are welcome! Please follow these steps:
- Fork the repository on GitHub.
- Create a feature branch (git checkout -b feature/new-feature).
- Commit changes (git commit -am 'Add new feature').
- Push to the branch (git push origin feature/new-feature).
- Open a pull request.
Report issues via GitHub Issues. Follow the code of conduct (if applicable).
Acknowledgments
- Powered by Norma.
- Thanks to contributors and open-source libraries like Pydantic, NumPy, and OpenAI SDK.
License
This project is licensed under the MIT License - see the LICENCE file for details.
Project details
Download files
- Source distribution: levelapp-0.1.15.tar.gz
- Built distribution: levelapp-0.1.15-py3-none-any.whl
File details
Details for the file levelapp-0.1.15.tar.gz.
File metadata
- Download URL: levelapp-0.1.15.tar.gz
- Upload date:
- Size: 340.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 05777503033525613cd52b06908c3adf213c2618c1b9e6e1fdbd5d0fe29633d3 |
| MD5 | ff13b112fc2366657a5a3b8571899ea1 |
| BLAKE2b-256 | 545c4e7dc844f4c5c033e378eefb435e15a7977845ebcda5e6a99ab073c3ae55 |
File details
Details for the file levelapp-0.1.15-py3-none-any.whl.
File metadata
- Download URL: levelapp-0.1.15-py3-none-any.whl
- Upload date:
- Size: 97.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d307f730af56e789bcd31fb620ae3ba720bb2d6a72d730f66e6c1abadd8a14cd |
| MD5 | b4d3526f0e075d557ccd0fbeb3aa01b8 |
| BLAKE2b-256 | 5f731dcd24f65fe5be0da3804879cf9d8d7b1746518ae090a0c8ecba2ee542fb |