FactLite 🪶

A lightweight framework for fact-checking AI-generated content

Give Your LLM a "System 2" Brain with a Single Decorator.

MIT License · Python 3.10+


In the last mile of deploying Generative AI, hallucination is the final boss. Heavy frameworks like LangChain introduce too much boilerplate and complexity, while raw API calls offer no safety net.

FactLite is a production-ready, feather-light Python micro-framework designed to solve this exact problem. It enhances your existing LLM calls with an automated, self-correcting evaluation loop, inspired by the Reflexion agent architecture, without forcing you to refactor your codebase.

🚀 Key Features

  • ✨ Zero-Intrusion: Add fact-checking and self-correction to any function with a single @verify decorator. No need to rewrite your existing logic.
  • ⚡️ Async-Native & Concurrency Safe: Built from the ground up to support async/await. The evaluation process runs in a separate thread to prevent blocking your main event loop, making it perfect for high-performance web backends like FastAPI.
  • 🤖 Agentic Workflow: Implements an automated Generate -> Evaluate -> Reflect loop. Your LLM is forced to critique and iteratively improve its own answers until they meet your quality standards.
  • 🧩 Extensible & Pluggable:
    • Bring your own judge! Use the built-in LLMJudge or create your own validation logic (e.g., regex, database lookups, type checks) with CustomJudge.
    • Define your own failure policies. Raise an error, return a safe message, or trigger a webhook with custom FallbackAction.
  • 🌐 Framework Agnostic: FactLite doesn't care how you call your LLM. Whether you're using the openai SDK, anthropic's client, or a simple requests.post call to a local model, as long as it's a Python function that returns a string, FactLite can safeguard it.

📦 Installation

pip install FactLite

🎯 Quick Start: The "Aha!" Moment

See how easy it is to upgrade your existing code from a simple API call to a self-correcting agent.

Before: A standard, unprotected LLM call.

import openai

client = openai.OpenAI(api_key="your-key")

def ask_ai(question: str):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

# This might return a factually incorrect answer, and you'd never know.
print(ask_ai("Was Li Bai an emperor in the Song Dynasty?"))

After: Protected by FactLite with a single line of code.

import openai
from FactLite import verify, rules, action

client = openai.OpenAI(api_key="your-key")

# Configure a powerful judge and your API key
config = verify.config(
    rule=rules.LLMJudge(model="gpt-4o-mini", api_key="your-key"),
    max_retries=1
)

@verify(config=config, user_prompt="question") # Just add this decorator!
def ask_ai(question: str):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

# Now, the function will automatically correct itself before returning.
print(ask_ai("Was Li Bai an emperor in the Song Dynasty?"))

What you'll see in your console:

10:30:05 - [FactLite] - Generating initial answer...
10:30:08 - [FactLite] - Evaluating answer quality...
10:30:12 - [FactLite] - ❌ Hallucination or error detected: The answer incorrectly states that Li Bai was related to the Song Dynasty. He was a poet from the Tang Dynasty.
10:30:12 - [FactLite] - Triggering reflection and rewrite, attempt 1...
10:30:16 - [FactLite] - Evaluating answer quality...
10:30:19 - [FactLite] - ✅ Correction successful, returning the verified answer!

No, Li Bai was not an emperor in the Song Dynasty. He was a renowned poet who lived during the Tang Dynasty (701-762 AD).

💡 Advanced Usage

Async Support

FactLite automatically detects and supports async functions.

from openai import AsyncOpenAI

async_client = AsyncOpenAI(api_key="your-key")

@verify(config=config, user_prompt="question")
async def ask_ai_async(question: str):
    response = await async_client.chat.completions.create(...)
    return response.choices[0].message.content

# Run it
import asyncio
asyncio.run(ask_ai_async("Tell me about the Tang Dynasty."))
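The thread-offloading behaviour described above can be illustrated without FactLite itself. A minimal sketch using `asyncio.to_thread`, with stand-in functions in place of real LLM and judge calls (the function names here are illustrative, not FactLite's internals):

```python
import asyncio
import time

def blocking_judge(answer: str) -> bool:
    # Stand-in for a slow, synchronous evaluation call (e.g. an LLM judge).
    time.sleep(0.1)
    return len(answer) > 0

async def ask_and_verify(question: str) -> str:
    draft = f"Draft answer to: {question}"  # stand-in for the async LLM call
    # Run the synchronous judge in a worker thread so the event loop stays free.
    passed = await asyncio.to_thread(blocking_judge, draft)
    return draft if passed else "fallback"

result = asyncio.run(ask_and_verify("Tell me about the Tang Dynasty."))
print(result)
```

This is the same pattern FastAPI backends use to keep synchronous work off the event loop: other coroutines continue running while the judge blocks in its worker thread.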

Custom Rules (CustomJudge)

Go beyond LLM-based checks. Enforce any local business logic you can imagine.

def company_policy_judge(prompt, answer):
    # Rule 1: No short answers
    if len(answer) < 50:
        return {"is_pass": False, "feedback": "Answer is too short. Please be more detailed."}
    # Rule 2: Don't mention competitors
    if "Google" in answer:
        return {"is_pass": False, "feedback": "Do not mention competitor names."}
    return {"is_pass": True, "feedback": ""}

@verify(rule=rules.CustomJudge(eval_func=company_policy_judge), user_prompt="prompt")
def ask_support_bot(prompt: str):
    # ... your LLM call
    pass
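Because a `CustomJudge` evaluation function is just a Python callable returning an `is_pass`/`feedback` dict, you can unit-test it without any LLM in the loop. Repeating the policy judge above for a self-contained check:

```python
def company_policy_judge(prompt, answer):
    # Same policy rules as above: reject short answers and competitor mentions.
    if len(answer) < 50:
        return {"is_pass": False, "feedback": "Answer is too short. Please be more detailed."}
    if "Google" in answer:
        return {"is_pass": False, "feedback": "Do not mention competitor names."}
    return {"is_pass": True, "feedback": ""}

# Exercise all three branches without touching an LLM.
assert company_policy_judge("q", "Too short.")["is_pass"] is False
long_ok = "Our product supports exporting reports in several formats, including CSV and PDF."
assert company_policy_judge("q", long_ok)["is_pass"] is True
assert company_policy_judge("q", long_ok + " Unlike Google.")["is_pass"] is False
```

Keeping judges pure (no I/O, deterministic) makes them cheap to run on every retry and trivial to cover in your test suite.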

Web-Enhanced Verification (Web_LLMJudge)

Leverage web search to verify answers against the latest information, perfect for time-sensitive or rapidly evolving topics.

@verify(
    rule=rules.Web_LLMJudge(
        model="gpt-4o-mini",
        max_results=3,  # Number of search results to use
        backend="duckduckgo"  # Search backend
    ),
    user_prompt="question"
)
def ask_ai_about_current_events(question: str):
    # ... your LLM call
    pass

Web_LLMJudge Parameters:

  • model: The OpenAI model to use for evaluation
  • max_results: Number of search results to incorporate (default: 3)
  • backend: Search backend, supports "duckduckgo", "bing", "google" (default: "duckduckgo")
  • proxy: Optional proxy for web search
  • api_key: Optional OpenAI API key (defaults to global openai.api_key)
  • base_url: Optional OpenAI API base URL

Custom Failure Actions (FallbackAction)

Decide exactly what happens when an answer fails all retries.

from FactLite import action

@verify(
    ...,
    on_fail=action.ReturnSafeMessage("I'm sorry, I cannot provide a confident answer to that question at the moment.")
)
def ask_sensitive_question(...):
    pass

@verify(..., on_fail=action.RaiseError())
def ask_critical_question(...):
    pass
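Beyond the built-in actions, the Key Features section mentions custom `FallbackAction` logic such as triggering a webhook. The exact callable interface isn't shown in this README, so the `(prompt, last_answer, feedback)` signature below is an assumption; a hedged sketch of a log-and-degrade fallback:

```python
import logging

logging.basicConfig(level=logging.WARNING)

class LogAndReturnDefault:
    """Hypothetical custom fallback: log the failure, return a canned reply.

    The (prompt, last_answer, feedback) call signature is an assumption --
    check FactLite's FallbackAction interface for the real contract.
    """
    def __init__(self, default: str):
        self.default = default

    def __call__(self, prompt: str, last_answer: str, feedback: str) -> str:
        logging.warning("Verification failed for %r: %s", prompt, feedback)
        return self.default

fallback = LogAndReturnDefault("I can't give a confident answer right now.")
result = fallback("Was Li Bai a Song emperor?", "Yes, he was.",
                  "Wrong dynasty: Li Bai was a Tang poet.")
print(result)
```

The same shape works for a webhook: replace the `logging` call with an HTTP POST to your alerting endpoint before returning the safe default.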

🛠️ How It Works

FactLite's @verify decorator wraps your function in a simple yet powerful control loop:

  1. Generate: Your original function is called to produce an initial draft.
  2. Evaluate: The configured rule (e.g., LLMJudge) is invoked to assess the draft.
  3. Reflect & Retry:
    • If the evaluation passes, the answer is returned to the user.
    • If it fails, the feedback is combined with the original prompt to create a "reflection prompt," forcing the LLM to correct its mistake. The process repeats from Step 1 until max_retries is reached.
  4. Fallback: If all retries fail, the configured on_fail action is executed.
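The four steps above can be sketched as plain Python, independent of FactLite's internals (all names here are illustrative, not the library's API). The toy "model" below deliberately answers wrong on the first pass and corrects itself once the reflection prompt carries the judge's feedback:

```python
def verified_call(generate, judge, reflect, prompt, max_retries, on_fail):
    # Step 1: Generate an initial draft.
    answer = generate(prompt)
    for attempt in range(max_retries + 1):
        # Step 2: Evaluate the draft.
        verdict = judge(prompt, answer)
        if verdict["is_pass"]:
            return answer                   # Step 3a: pass -> return to caller
        if attempt == max_retries:
            return on_fail(prompt, answer)  # Step 4: all retries exhausted
        # Step 3b: fold the feedback into a reflection prompt and regenerate.
        answer = generate(reflect(prompt, answer, verdict["feedback"]))

# Toy components to exercise the loop.
def generate(p):
    return "correct" if "Feedback:" in p else "wrong"

def judge(p, a):
    return {"is_pass": a == "correct", "feedback": "Li Bai was a Tang poet."}

def reflect(p, a, fb):
    return f"{p}\nPrevious answer: {a}\nFeedback: {fb}\nPlease revise."

result = verified_call(generate, judge, reflect, "Was Li Bai a Song emperor?",
                       max_retries=1, on_fail=lambda p, a: "fallback")
print(result)  # the loop retries once and returns the corrected draft
```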

🤝 Contributing

Contributions are welcome! Whether it's a new rule, a new fallback action, or a performance improvement, feel free to open an issue or submit a pull request.

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.
