True Lies - Separating truth from AI fiction. A powerful library for detecting LLM hallucinations, validating AI responses, and generating professional HTML reports with interactive dashboards.
True Lies Validator 🎭
The easiest library to validate LLM and chatbot responses
Validates whether your LLM or chatbot is telling the truth, remembering context, and maintaining coherence. Perfect for automated conversation testing.
🚀 Quick Installation
# Install the library
pip install true-lies-validator
# Verify installation
python -c "from true_lies import ConversationValidator, HTMLReporter; print('✅ Installed successfully')"
📦 Current version: 0.8.0 - With interactive HTML reports, improved dashboards, and simplified CI/CD integration
⚡ Get Started in 2 Minutes
True Lies supports two types of validation:
- Candidate Validation - Validate LLM responses against expected facts and semantic reference
- Multi-turn Conversation Validation - Test if LLMs remember context across conversation turns
Type 1: Candidate Validation (Most Common)
This is the most basic and common way to use True Lies - validate multiple LLM responses against a scenario:
from true_lies import validate_llm_candidates, create_scenario
# Define your test scenario
scenario = create_scenario(
    facts={
        "patient_name": {"expected": "John Smith", "extractor": "regex", "pattern": r"(?:patient|name):\s*([A-Z][a-z]+\s+[A-Z][a-z]+)"},
        "appointment_date": {"expected": "March 15, 2024", "extractor": "regex", "pattern": r"(\w+\s+\d{1,2},\s+\d{4})"},
        "doctor": {"expected": "Dr. Johnson", "extractor": "regex", "pattern": r"(Dr\.\s+\w+)"}
    },
    semantic_reference="Patient John Smith has an appointment with Dr. Johnson on March 15, 2024"
)

# Test multiple LLM responses
candidates = [
    "John Smith's appointment with Dr. Johnson is scheduled for March 15, 2024",
    "Patient: John Smith. Doctor: Dr. Johnson. Date: March 15, 2024",
    "Appointment for John Smith on March 15, 2024 with Dr. Johnson"
]

# Validate all candidates and generate HTML report
result = validate_llm_candidates(
    scenario=scenario,
    candidates=candidates,
    threshold=0.65,
    generate_html_report=True,
    html_title="Appointment Booking Validation"
)

print(f"📄 Report: {result['html_report_path']}")
print(f"✅ Passed: {result['summary']['passed_count']}/{result['summary']['total_count']}")
💡 Best Practice: Keep your facts and candidates in separate files (JSON/YAML) for better organization. See our live demo for examples.
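As a quick sanity check, you can exercise the regex patterns from the scenario above with Python's standard library before handing them to the validator. The sketch below uses `re.IGNORECASE` because candidate phrasing varies in case; whether True Lies applies case-insensitive matching internally is an assumption here, not documented behavior.

```python
import re

# Patterns copied verbatim from the scenario above
patterns = {
    "patient_name": r"(?:patient|name):\s*([A-Z][a-z]+\s+[A-Z][a-z]+)",
    "appointment_date": r"(\w+\s+\d{1,2},\s+\d{4})",
    "doctor": r"(Dr\.\s+\w+)",
}

candidate = "Patient: John Smith. Doctor: Dr. Johnson. Date: March 15, 2024"
for fact, pattern in patterns.items():
    match = re.search(pattern, candidate, re.IGNORECASE)
    print(fact, "->", match.group(1) if match else None)
```

For this candidate, all three patterns resolve to their expected values ("John Smith", "March 15, 2024", "Dr. Johnson"); a `None` in this kind of dry run usually means the pattern needs adjusting before you rely on it in a scenario.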
Type 2: Multi-turn Conversation Validation
Test if your LLM remembers context across multiple conversation turns:
from true_lies import ConversationValidator
# Create validator
conv = ConversationValidator()

# Turn 1: User reports problem
conv.add_turn_and_report(
    user_input="My app doesn't work, I'm user ID 12345",
    bot_response="Hello, I'll help you. What error do you see?",
    expected_facts={'user_id': '12345', 'issue_type': 'app_not_working'},
    title="Turn 1: User reports problem"
)

# Turn 2: User provides details
conv.add_turn_and_report(
    user_input="Error 500 on login, email john@company.com",
    bot_response="I understand, error 500 on login. Checking your account.",
    expected_facts={'error_code': '500', 'email': 'john@company.com'},
    title="Turn 2: User provides details"
)

# Test: Does the bot remember everything from previous turns?
final_response = "John (ID 12345), your error 500 will be fixed in 2 hours"
retention = conv.validate_and_report(
    response=final_response,
    facts_to_check=['user_id', 'error_code', 'email'],
    title="Context Retention Test"
)

print(f"📊 Retention Score: {retention['retention_score']:.2f}")
print(f"✅ Facts Retained: {retention['facts_retained']}/{retention['total_facts']}")
📁 Best Practice: External Data Files
For production use, keep your test data in separate files:
scenario.json:
{
  "name": "Appointment Booking Test",
  "semantic_reference": "Patient John Smith has an appointment with Dr. Johnson on March 15, 2024",
  "facts": {
    "patient_name": {
      "expected": "John Smith",
      "extractor": "regex",
      "pattern": "(?:patient|name):\\s*([A-Z][a-z]+\\s+[A-Z][a-z]+)"
    },
    "appointment_date": {
      "expected": "March 15, 2024",
      "extractor": "regex",
      "pattern": "(\\w+\\s+\\d{1,2},\\s+\\d{4})"
    }
  }
}
candidates.json:
[
  "John Smith's appointment with Dr. Johnson is scheduled for March 15, 2024",
  "Patient: John Smith. Date: March 15, 2024",
  "Appointment for John Smith on March 15"
]
test_appointment.py:
import json
from true_lies import validate_llm_candidates
# Load test data
with open('scenario.json', 'r') as f:
    scenario = json.load(f)
with open('candidates.json', 'r') as f:
    candidates = json.load(f)

# Run validation
result = validate_llm_candidates(
    scenario=scenario,
    candidates=candidates,
    threshold=0.65,
    generate_html_report=True
)

print(f"✅ Passed: {result['summary']['passed_count']}/{result['summary']['total_count']}")
See it in action: Check out our live demo project for a complete example with external data files.
🎯 Popular Use Cases
E-commerce
# Customer buying product
conv.add_turn_and_report(
    user_input="Hello, I'm Maria, I want to buy a laptop for $1500",
    bot_response="Hello Maria! I'll help you with the laptop. Registered email: maria@store.com",
    expected_facts={'customer_name': 'Maria', 'product': 'laptop', 'budget': '1500'},
    title="Turn 1: Customer identifies themselves"
)
Banking
# Customer requesting loan
conv.add_turn_and_report(
    user_input="I'm Carlos, I work at TechCorp, I earn $95,000, I want a loan",
    bot_response="Hello Carlos! I'll help you with your loan. Email: carlos@bank.com",
    expected_facts={'customer_name': 'Carlos', 'employer': 'TechCorp', 'income': '95000'},
    title="Turn 1: Customer requests loan"
)
Technical Support
# User reports problem
conv.add_turn_and_report(
    user_input="My app doesn't work, I'm user ID 12345",
    bot_response="Hello, I'll help you. What error do you see?",
    expected_facts={'user_id': '12345', 'issue_type': 'app_not_working'},
    title="Turn 1: User reports problem"
)
🔧 Main Methods
add_turn_and_report() - Add turn with automatic reporting
conv.add_turn_and_report(
    user_input="...",
    bot_response="...",
    expected_facts={'key': 'value'},
    title="Turn description"
)
validate_and_report() - Validate retention with automatic reporting
retention = conv.validate_and_report(
    response="Bot response to validate",
    facts_to_check=['fact1', 'fact2'],
    title="Retention Test"
)
print_conversation_summary() - Conversation summary
conv.print_conversation_summary("Conversation Summary")
📋 Supported Fact Types
The library automatically detects these types of information:
- Names: "John", "Maria Gonzalez"
- Emails: "john@company.com", "maria@store.com"
- Phones: "+1-555-123-4567", "(555) 123-4567"
- IDs: "12345", "USER-001", "POL-2024-001"
- Amounts: "$1,500", "1500", "USD 1500"
- Employers: "TechCorp", "Google Inc", "Microsoft"
- Dates: "2024-12-31", "31/12/2024", "December 31, 2024"
- Percentages: "15%", "15 percent", "fifteen percent"
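The library's internal detection patterns are not published in this document, but rough standard-library equivalents for a few of the types above illustrate the idea. The regexes below are illustrative approximations, not True Lies internals:

```python
import re

# Illustrative approximations of a few fact types -- not the library's
# actual patterns, just enough to show what each type looks like.
ROUGH_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "id": r"\b[A-Z]+-\d{4}-\d{3}\b",
    "amount": r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?",
}

text = "Contact maria@store.com about policy POL-2024-001 for $1,500"
detected = {kind: re.search(p, text).group(0) for kind, p in ROUGH_PATTERNS.items()}
print(detected)
```

On the sample sentence this picks out "maria@store.com", "POL-2024-001", and "$1,500"; in practice you would rely on the built-in extractors (money, email, id, ...) rather than hand-rolling patterns like these.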
🎨 Automatic Reporting
True Lies handles all the reporting. You only need 3 lines:
# Before (30+ lines of manual code)
print(f"📊 Detailed results:")
for fact in facts:
    retained = retention.get(f'{fact}_retained', False)
    # ... 25 more lines of manual prints

# After (3 simple lines)
retention = conv.validate_and_report(
    response=final_response,
    facts_to_check=['fact1', 'fact2'],
    title="Retention Test"
)
📊 HTML Reports & Dashboard
Generate professional HTML reports with interactive dashboards in just one line:
🚀 Super Simple HTML Reports
from true_lies import validate_llm_candidates, create_scenario
# Define your test scenario
scenario = create_scenario(
    facts={
        "policy_number": {"expected": "POL-2024-001", "extractor": "regex", "pattern": r"#?(POL-\d{4}-\d{3})"},
        "premium_amount": {"expected": "850.00", "extractor": "money"},
        "insurance_type": {"expected": "auto insurance", "extractor": "categorical",
                           "patterns": {"auto insurance": ["auto insurance", "car insurance", "vehicle insurance"]}}
    },
    semantic_reference="Your auto insurance policy #POL-2024-001 has a premium of $850.00"
)

# Test multiple candidates
candidates = [
    "Your auto insurance policy #POL-2024-001 has a premium of $850.00",
    "Auto insurance policy POL-2024-001 costs $850.00",
    "Policy #POL-2024-001: $850.00 for auto insurance"
]

# Generate HTML report with ONE line! 🚀
result = validate_llm_candidates(
    scenario=scenario,
    candidates=candidates,
    threshold=0.65,
    generate_html_report=True,  # ← This generates the report!
    html_title="Insurance Policy Validation Report"
)

print(f"📄 Report saved to: {result['html_report_path']}")
🎨 Interactive Dashboard Features
📊 Real-time Analytics:
- Success Rate Distribution - Centered chart showing pass/fail distribution
- Performance Trend - Historical performance with configurable target line
- Similarity Score Trend - Semantic similarity tracking over time
- Fact Retention Trend - Percentage of facts retained across tests
📋 Interactive Table:
- Sortable columns - Click headers to sort by ID, Score, Status, etc.
- Expandable details - Click "View Details" to see full test information
- Card-style details - Professional styling with smooth transitions
- Real-time filtering - Filter and search through results
📈 Historical Data:
- Automatic data persistence - Results saved to true_lies_reporting/validation_history.json
- Temporal analysis - Track performance over days/weeks/months
- Target control - Set and adjust performance targets dynamically
- Trend visualization - See improvement patterns over time
🎯 Key Benefits
- ✅ One-line report generation - No complex setup required
- ✅ Automatic data persistence - Historical tracking built-in
- ✅ Interactive dashboards - Professional charts and visualizations
- ✅ Real-time sorting - Click to sort any column
- ✅ Expandable details - Toggle detailed test information
- ✅ Responsive design - Works on desktop and mobile
- ✅ Professional styling - Ready for stakeholder presentations
🔄 CI/CD Integration
True Lies integrates seamlessly into CI/CD pipelines for automated LLM validation. Here's a complete example based on a real project:
Complete Example with GitHub Actions
1. Project structure:
your-project/
├── .github/
│   └── workflows/
│       └── test-and-report.yml   # GitHub Actions workflow
├── tests/
│   └── test_chatbot.py           # Your tests with True Lies
├── true_lies_reporting/          # Reports and history (auto-generated)
└── requirements.txt              # Includes true-lies-validator
2. Test file (tests/test_chatbot.py):
from true_lies import validate_llm_candidates, create_scenario
def test_support_chatbot():
    """Technical support chatbot test"""
    scenario = create_scenario(
        facts={
            "user_id": {"expected": "12345", "extractor": "regex", "pattern": r"ID\s*(\d+)"},
            "issue": {"expected": "login", "extractor": "categorical",
                      "patterns": {"login": ["login", "sign in", "log in"]}}
        },
        semantic_reference="User ID 12345 reports login problem"
    )
    candidates = [
        "User ID 12345 has a login problem",
        "User with ID 12345 cannot sign in to the system",
    ]
    result = validate_llm_candidates(
        scenario=scenario,
        candidates=candidates,
        threshold=0.65,
        generate_html_report=True,
        html_title="Support Chatbot Test"
    )
    print(f"✅ Report generated: {result['html_report_path']}")
    return result

if __name__ == "__main__":
    test_support_chatbot()
3. GitHub Actions Workflow (.github/workflows/test-and-report.yml):
name: LLM Validation with True Lies

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test-and-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install dependencies
        run: |
          pip install true-lies-validator
          pip install -r requirements.txt

      - name: Run tests and generate reports
        run: |
          python tests/test_chatbot.py

      - name: Upload HTML reports as artifacts
        uses: actions/upload-artifact@v4
        with:
          name: llm-validation-reports
          path: |
            *.html
            true_lies_reporting/
          retention-days: 30

      - name: Publish reports to GitHub Pages (optional)
        if: github.ref == 'refs/heads/main'
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./
          publish_branch: gh-pages
          keep_files: false
4. View the reports:
- As artifacts: In GitHub Actions → Your workflow → Artifacts → Download llm-validation-reports
- On GitHub Pages: Configure GitHub Pages and access https://your-username.github.io/your-repo/
- Live example: Demo True Lies
🎯 CI/CD Features with True Lies:
- ✅ Automatic execution - Tests run on every push/PR
- ✅ Automatic HTML reports - Generated and saved automatically
- ✅ Preserved history - Historical data maintained in true_lies_reporting/
- ✅ GitHub Pages publishing - Reports accessible from any browser
- ✅ Trends and metrics - Dashboards with automatic temporal analysis
- ✅ No complex setup - Just add the workflow and run your tests
📊 Automatic Metrics
- Retention Score: 0.0 - 1.0 (how well it remembers)
- Facts Retained: X/Y facts remembered
- Evaluation: A, B, C, D, F (automatic grading)
- Details per Fact: What was found and what wasn't
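The A-F evaluation maps the retention score onto a letter grade. The exact cutoffs the library uses are not documented here, so the sketch below uses conventional grade boundaries purely for illustration:

```python
def grade(retention_score: float) -> str:
    """Map a 0.0-1.0 retention score to a letter grade.

    The cutoffs below are hypothetical -- the library's real grading
    boundaries may differ.
    """
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if retention_score >= cutoff:
            return letter
    return "F"

print(grade(0.85))  # "B" under these illustrative cutoffs
```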
🔍 Advanced Validation (Optional)
For more complex cases, you can also use traditional validation with scenarios:
from true_lies import create_scenario, validate_llm_candidates
# Facts that MUST be in the response
facts = {
    'policy_number': {'extractor': 'regex', 'expected': 'POL-2024-001', 'pattern': r'POL-\d{4}-\d{3}'},
    'premium': {'extractor': 'money', 'expected': '850.00'},
    'coverage_type': {'extractor': 'categorical', 'expected': 'auto insurance',
                      'patterns': {'auto insurance': ['auto insurance', 'car insurance']}}
}

# Reference text for semantic comparison
reference_text = "Your auto insurance policy #POL-2024-001 has a premium of $850.00"

# Create scenario (with automatic fact weighting)
scenario = create_scenario(
    facts=facts,
    semantic_reference=reference_text,
    semantic_mappings={}  # Weights are applied automatically
)

# Validate responses
candidates = [
    "Policy POL-2024-001 covers your automobile with monthly payments of $850.00",
    "Your car insurance policy POL-2024-001 costs $850 monthly"
]
results = validate_llm_candidates(
    scenario=scenario,
    candidates=candidates,
    threshold=0.7,
    generate_html_report=True
)
🎯 Advanced Features
Automatic Fact Weighting:
- Values in your expected facts are automatically weighted
- Significant improvement in similarity scores (+55% in typical cases)
- No additional configuration needed
Improved Polarity Detection:
- Correctly detects negative phrases with "not", "does not", "don't", etc.
- Patterns in English and Spanish
- Avoids false positives with substrings
Optimized Semantic Mappings:
- Use simple and specific mappings
- Avoid over-mapping that can worsen scores
- Recommendation: minimal mappings or no mappings
💡 Best Practices
1. Fact Configuration:
# ✅ CORRECT - For specific numbers
'account_number': {'extractor': 'regex', 'expected': '2992', 'pattern': r'\d+'}

# ❌ INCORRECT - For specific numbers
'account_number': {'extractor': 'categorical', 'expected': '2992'}

# ✅ CORRECT - For categories
'account_type': {'extractor': 'categorical', 'expected': 'savings'}
2. Semantic Mappings:
# ✅ CORRECT - Simple mappings
semantic_mappings = {
    "account": ["cuenta"],
    "balance": ["saldo", "amount"]
}

# ❌ INCORRECT - Excessive mappings
semantic_mappings = {
    "phrases": ["the balance of your", "your term deposit account", ...]  # Too aggressive
}
3. Thresholds:
- 0.6-0.7: For strict validation
- 0.5-0.6: For permissive validation
- 0.8+: Only for exact cases
🎯 Available Extractors
- money: Monetary values ($1,234.56, USD 27, 100 dollars)
- number: General numbers (25, 3.14, 1000)
- categorical: Categorical values with synonyms
- email: Email addresses
- phone: Phone numbers
- hours: Time schedules (9:00 AM, 14:30, 3:00 PM)
- id: Identifiers (USER-001, POL-2024-001)
- regex: Custom patterns
🔧 Extractor Improvements
Improved money extractor:
- Prioritizes amounts with currency symbols ($, USD, dollars)
- Avoids capturing non-monetary numbers
- Better accuracy in banking scenarios
- Uses the money key exclusively (not currency or other aliases)
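The currency-first behavior described above can be sketched with the standard library. This is an illustration of the idea (symbol-marked amounts win over bare numbers), not the library's actual implementation:

```python
import re

def extract_money(text: str):
    """Prefer currency-marked amounts; fall back to bare numbers.

    Rough sketch of the behaviour described above, not True Lies code.
    """
    marked = re.findall(r"(?:\$|USD\s*)(\d[\d,]*(?:\.\d{2})?)", text)
    if marked:
        return marked[0].replace(",", "")
    bare = re.findall(r"\d[\d,]*(?:\.\d{2})?", text)
    return bare[0].replace(",", "") if bare else None

# The account number 2992 is skipped because $3,000.60 carries a symbol
print(extract_money("The balance of your account 2992 is $3,000.60"))
```

With the symbol-first rule, the sketch returns "3000.60" for the banking sentence above instead of grabbing the account number, which is the kind of mistake the improved extractor avoids.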
Improved categorical extractor:
- Whole word matches (avoids false positives)
- Better detection of specific patterns
- Compatible with exact expected values
- Domain-agnostic - use categorical patterns for domain-specific needs
🎯 Examples & Demos
Available Examples
- Basic HTML Report - Simple report generation
- Advanced Filters Demo - Advanced filtering capabilities
- Temporal Analysis Demo - Temporal analysis features
- Advanced Search Demo - Real-time search functionality
- PDF Export Demo - PDF export capabilities
Real CI/CD Example
- Demo True Lies - Complete project with GitHub Actions
- Live Reports - Reports published on GitHub Pages
🛠️ Diagnostic Tool
To diagnose similarity and extraction issues:
from diagnostic_tool import run_custom_diagnosis
# Your configuration
fact_configs = {
    'account_number': {'extractor': 'regex', 'expected': '2992', 'pattern': r'\d+'},
    'balance_amount': {'extractor': 'money', 'expected': '3000.60'}
}
candidates = ["Your account 2992 has $3,000.60"]

# Diagnose
run_custom_diagnosis(
    text="The balance of your Term Deposit account 2992 is $3,000.60",
    fact_configs=fact_configs,
    candidates=candidates
)
📝 Changelog
v0.8.0 (Current) - 2024-12-31
🎨 Interactive Dashboard Improvements:
- ✅ Interactive expand/collapse functionality for "View Details" buttons
- ✅ Dynamic button text changes ("View Details" → "Hide Details")
- ✅ Visual feedback with button color changes (blue → red when expanded)
- ✅ Card-style styling with left border and smooth transitions
- ✅ Professional styling for detailed test information
📊 Enhanced Analytics & Visualizations:
- ✅ Similarity Score Trend chart showing semantic similarity over time
- ✅ Fact Retention Trend chart tracking percentage of facts retained
- ✅ Performance Trend with configurable target line
- ✅ Historical data persistence in true_lies_reporting/validation_history.json
- ✅ Automatic data cleanup (30-day retention policy)
🚀 Simplified HTML Report Generation:
- ✅ One-line HTML report generation with the generate_html_report=True parameter
- ✅ Automatic file naming with timestamps
- ✅ Integration with the validate_llm_candidates function
- ✅ Streamlined API for report generation
🔧 Interactive Table Improvements:
- ✅ Sortable columns with click-to-sort functionality
- ✅ Toggle between ascending and descending order
- ✅ Row filtering to handle inconsistent table structures
- ✅ Visual sort indicators (↑, ↓, ↕) on column headers
v0.7.0 - 2024-12-30
- ✅ HTML Reporter - Professional HTML reports with interactive dashboards
- ✅ Interactive Charts - Chart.js integration for visual analytics
- ✅ Advanced Filtering - Real-time search and filtering capabilities
- ✅ Temporal Analysis - Daily/Weekly/Monthly performance tracking
- ✅ CI/CD Integration - GitHub Actions, Jenkins, GitLab CI support
v0.6.0 - 2024-12-29
- ✅ Multi-turn conversation validation
- ✅ Automatic fact extraction and validation
- ✅ Comprehensive reporting system
- ✅ Support for various data types (emails, money, dates, IDs)
🤝 Contributing
Contributions are welcome! Please:
- Fork the project
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- The open source community for inspiration and feedback
True Lies - Where AI meets reality 🎭
Have questions? Open an issue or contact the development team.