
A new package enables users to provide text inputs and receive reliably structured responses that clearly present key information with confidence indicators, reducing misunderstanding and overconfidence.


verify_response


A Python package that ensures structured, verified, and reliable responses from language models by enforcing strict output formatting and confidence indicators. This package helps reduce ambiguity and overconfidence in AI-generated outputs, making it ideal for applications requiring precise data extraction, summaries, or structured insights.


📦 Installation

Install the package via pip:

pip install verify_response

🚀 Features

  • Structured Outputs: Enforces strict regex-based response formatting to ensure consistency.
  • Confidence Indicators: Provides clear indicators of response reliability.
  • Flexible LLM Support: Works with default ChatLLM7 or any LangChain-compatible LLM.
  • No Multimedia Processing: Focuses solely on text inputs and structured outputs.
  • Transparency: Reduces false confidence by validating output against predefined patterns.
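
The regex-based validation idea behind the features above can be illustrated with a small, self-contained sketch. The pattern and field names here are hypothetical, chosen only to show the technique — they are not the package's actual internal format:

```python
import re

# Hypothetical format: "ANSWER: <text> | CONFIDENCE: <high|medium|low>"
PATTERN = re.compile(
    r"^ANSWER: (?P<answer>.+) \| CONFIDENCE: (?P<confidence>high|medium|low)$"
)

def validate_reply(reply: str):
    """Return the parsed fields if the reply matches the format, else None."""
    match = PATTERN.match(reply.strip())
    return match.groupdict() if match else None

# A well-formed reply parses into structured fields:
print(validate_reply("ANSWER: Paris | CONFIDENCE: high"))
# A free-form reply is rejected instead of being passed through:
print(validate_reply("Paris, probably"))
```

Rejecting non-conforming replies outright, rather than guessing at their meaning, is what keeps downstream consumers from acting on ambiguous output.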

🔧 Usage

Basic Usage (Default LLM7)

from verify_response import verify_response

response = verify_response(user_input="What is the capital of France?")
print(response)  # Structured, verified output

Custom LLM (OpenAI)

from langchain_openai import ChatOpenAI
from verify_response import verify_response

llm = ChatOpenAI(model="gpt-4o-mini")  # requires OPENAI_API_KEY in the environment
response = verify_response(user_input="Summarize this text...", llm=llm)
print(response)

Custom LLM (Anthropic)

from langchain_anthropic import ChatAnthropic
from verify_response import verify_response

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # model is required; needs ANTHROPIC_API_KEY
response = verify_response(user_input="Extract key points...", llm=llm)
print(response)

Custom LLM (Google Generative AI)

from langchain_google_genai import ChatGoogleGenerativeAI
from verify_response import verify_response

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model is required; needs GOOGLE_API_KEY
response = verify_response(user_input="Analyze this data...", llm=llm)
print(response)

🔑 API Key Configuration

Default (LLM7 Free Tier)

The package defaults to ChatLLM7 with the API key loaded from the environment variable LLM7_API_KEY. If not set, it falls back to a default key (not recommended for production).
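
The key-resolution order described above can be sketched as follows. The helper name and the placeholder default key are illustrative, not the package's actual internals:

```python
import os

def resolve_api_key(explicit_key=None):
    """Prefer an explicitly passed key, then LLM7_API_KEY, then a bundled default."""
    if explicit_key:
        return explicit_key
    # Falling back to a shared default key is fine for experimentation,
    # but not recommended for production use.
    return os.environ.get("LLM7_API_KEY", "default-free-tier-key")
```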

Custom API Key

Pass your API key directly or via environment variable:

# Passed directly
verify_response(user_input="...", api_key="your_llm7_api_key")

# Or read from the environment (set in your shell first):
#   export LLM7_API_KEY="your_llm7_api_key"
verify_response(user_input="...")

Get a free API key: LLM7 Token Registration


📝 Parameters

  • user_input (str): The input text to process.
  • api_key (Optional[str]): LLM7 API key; defaults to the LLM7_API_KEY environment variable.
  • llm (Optional[BaseChatModel]): Custom LangChain LLM (e.g., ChatOpenAI, ChatAnthropic). Defaults to ChatLLM7.

📊 Rate Limits

The default ChatLLM7 free tier supports most use cases. For higher limits, use your own API key or upgrade via LLM7.
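
If you do hit free-tier limits, a generic retry wrapper with exponential backoff can smooth over transient errors. This wrapper is not part of the package — it is a minimal sketch you could place around any call:

```python
import time

def with_backoff(fn, retries=3, base_delay=1.0):
    """Call fn, retrying on failure with exponentially increasing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage, with verify_response imported from the package:
# result = with_backoff(lambda: verify_response(user_input="..."))
```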


📜 License

MIT


📢 Support & Issues

For bugs or feature requests, open an issue on GitHub.


👤 Author

Eugene Evstafev 📧 hi@euegne.plus 🔗 GitHub: chigwell

