Library to systematically track and evaluate LLM-based applications.
Welcome to TruLens-Eval!
Don't just vibe-check your LLM app! Systematically evaluate and track your LLM experiments with TruLens. As you develop your app, including prompts, models, retrievers, knowledge sources and more, TruLens-Eval is the tool you need to understand its performance.
Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you identify failure modes and systematically iterate to improve your application.
Read more about the core concepts behind TruLens including Feedback Functions, The RAG Triad, and Honest, Harmless and Helpful Evals.
TruLens in the development workflow
Build your first prototype, then connect instrumentation and logging with TruLens. Decide which feedback functions you need and specify them with TruLens to run alongside your app. Then iterate and compare versions of your app in an easy-to-use user interface, as sketched below.
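As an illustration of that loop, here is a minimal sketch. It assumes an existing LangChain chain named `chain` and a configured OpenAI API key; the `relevance` feedback and the `app_id` value are example choices, and exact import paths can differ between trulens-eval versions.

```python
from trulens_eval import Feedback, Tru, TruChain
from trulens_eval.feedback.provider import OpenAI

tru = Tru()  # logs records and feedback results to a local database by default

# Define a feedback function: LLM-graded relevance of the response to the prompt.
provider = OpenAI()
f_answer_relevance = Feedback(provider.relevance).on_input_output()

# Wrap the existing app so its calls are instrumented and evaluated as they run.
tru_recorder = TruChain(chain, app_id="my_app_v1", feedbacks=[f_answer_relevance])

with tru_recorder as recording:
    chain("What does TruLens-Eval do?")

tru.run_dashboard()  # open the local UI to inspect and compare app versions
```

Changing the prompt, model, or retriever and re-recording under a new `app_id` lets you compare versions side by side in the dashboard.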
Installation and Setup
Install the trulens-eval pip package from PyPI.
pip install trulens-eval
Quick Usage
Walk through how to instrument and evaluate a RAG built from scratch with TruLens.
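The walkthrough instruments a hand-rolled RAG with the `@instrument` decorator and wraps it with `TruCustomApp`. The sketch below assumes that pattern; the class and method bodies are placeholders standing in for a real vector-store lookup and LLM call, and import paths may vary by version.

```python
from trulens_eval import Tru, TruCustomApp
from trulens_eval.tru_custom_app import instrument

class RAG:
    @instrument
    def retrieve(self, query: str) -> list:
        # Placeholder: replace with a real vector-store query.
        return ["TruLens-Eval evaluates and tracks LLM apps."]

    @instrument
    def generate(self, query: str, context: list) -> str:
        # Placeholder: replace with a real LLM completion using the context.
        return f"Answer to '{query}' based on {len(context)} retrieved passage(s)."

    @instrument
    def query(self, query: str) -> str:
        return self.generate(query, self.retrieve(query))

tru = Tru()
rag = RAG()
tru_rag = TruCustomApp(rag, app_id="RAG v1")  # pass feedbacks=[...] to evaluate

with tru_rag as recording:
    rag.query("What does TruLens-Eval do?")
```

Because each step is instrumented, retrieved contexts and generated answers are recorded separately, which is what lets feedback functions such as the RAG triad score retrieval and generation independently.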
💡 Contributing
Interested in contributing? See our contribution guide for more details.