
Core functionality for TLM

Project description

Trustworthy Language Model (TLM)

The Trustworthy Language Model scores the trustworthiness of outputs from any LLM in real time.

Automatically detect hallucinated/incorrect responses in: Q&A (RAG), Chatbots, Agents, Structured Outputs, Data Extraction, Tool Calling, Classification/Tagging, Data Labeling, and other LLM applications.

Use TLM to:

  • Guardrail AI mistakes before they are served to users
  • Escalate untrustworthy AI responses to human review
  • Discover incorrect LLM- (or human-) generated outputs in datasets/logs
  • Boost AI accuracy
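The guardrail and escalation use cases above can be sketched as a simple score-threshold router. Note the `score_trustworthiness` function and both thresholds below are hypothetical stand-ins, not part of the TLM API:

```python
# Hypothetical sketch: routing an LLM response based on a TLM-style
# trustworthiness score in [0, 1]. score_trustworthiness is a toy
# stand-in for a real TLM call; the thresholds are illustrative only.

def score_trustworthiness(prompt: str, response: str) -> float:
    """Stand-in scorer; a real deployment would call TLM here."""
    # Toy heuristic purely so the sketch runs end to end.
    return 0.9 if response and "unsure" not in response.lower() else 0.3

def route_response(prompt: str, response: str,
                   serve_threshold: float = 0.8,
                   escalate_threshold: float = 0.5) -> str:
    """Serve trusted answers, escalate borderline ones, block the rest."""
    score = score_trustworthiness(prompt, response)
    if score >= serve_threshold:
        return "serve"      # confident: show the answer to the user
    if score >= escalate_threshold:
        return "escalate"   # uncertain: hand off to a human reviewer
    return "block"          # likely hallucination: guardrail it

if __name__ == "__main__":
    print(route_response("What is 2+2?", "4"))                    # serve
    print(route_response("Capital of X?", "I'm unsure, maybe Y"))  # block
```

In practice the thresholds would be tuned per application against the precision/recall trade-off described in the benchmarks below.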

Powered by uncertainty-estimation techniques, TLM works out of the box and requires no data preparation/labeling work and no custom model training/serving infrastructure.

Learn more, and see precision/recall benchmarks against frontier models (from OpenAI, Anthropic, Google, etc.), in the Blog and Research Paper.

Usage

See the notebooks for Jupyter examples of usage.
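As a rough illustration of the calling pattern (a prompt goes in; a response plus a trustworthiness score come back), here is a stub. The class name, method, and return keys are assumptions standing in for the real client; consult the notebooks for the actual API:

```python
# Hypothetical stand-in for a TLM client: prompt in, response plus a
# trustworthiness score in [0, 1] out. All names here are illustrative.

class FakeTLM:
    def prompt(self, text: str) -> dict:
        # A real client would query an LLM and score its output;
        # this stub returns a canned result so the sketch runs offline.
        return {"response": "Paris", "trustworthiness_score": 0.97}

tlm = FakeTLM()
out = tlm.prompt("What is the capital of France?")
print(out["response"], out["trustworthiness_score"])
```

The key contract is that every response is paired with a score, so downstream code can branch on trustworthiness rather than treating all LLM outputs alike.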

Download files

Download the file for your platform.

Source Distribution

trustworthy_llm-0.0.3.tar.gz (640.7 kB)

Uploaded Source

Built Distribution


trustworthy_llm-0.0.3-py3-none-any.whl (80.4 kB)

Uploaded Python 3

File details

Details for the file trustworthy_llm-0.0.3.tar.gz.

File metadata

  • Download URL: trustworthy_llm-0.0.3.tar.gz
  • Upload date:
  • Size: 640.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for trustworthy_llm-0.0.3.tar.gz:

  • SHA256: f237c9e009954153168fc81f37d329c393ca8570554d38672365a57dc10c72d1
  • MD5: f8890929fb564fadcd108dd5d56b5147
  • BLAKE2b-256: 5c748eb97abe6174df02f092f444d8efa0ff99ed9fcb52cc0627044f470d4d88

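To check a downloaded sdist against the SHA256 digest above, a minimal verification helper using only the standard library (the file path in the example is a placeholder for wherever you saved the archive):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example: compare against the published digest (placeholder path).
expected = "f237c9e009954153168fc81f37d329c393ca8570554d38672365a57dc10c72d1"
# ok = sha256_of_file("trustworthy_llm-0.0.3.tar.gz") == expected
```

Alternatively, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) performs this verification automatically at install time.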

Provenance

The following attestation bundles were made for trustworthy_llm-0.0.3.tar.gz:

Publisher: release.yml on cleanlab/tlm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file trustworthy_llm-0.0.3-py3-none-any.whl.

File hashes

Hashes for trustworthy_llm-0.0.3-py3-none-any.whl:

  • SHA256: 1a50c21de704caefc9f0809f58eea748825127e4da30ec7a5690eb8a778b6a7a
  • MD5: 2c7861cd490d0f8ecc09f2f10d262b3e
  • BLAKE2b-256: b65b27e45279254795f222a7a3c8da9bdfd9ef18b15c88d7f71bbad926540bf5


Provenance

The following attestation bundles were made for trustworthy_llm-0.0.3-py3-none-any.whl:

Publisher: release.yml on cleanlab/tlm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
