
UpTrain - tool to evaluate LLM applications on aspects like factual accuracy, response quality, retrieval quality, tonality, etc.


Logo of UpTrain - an open-source platform to evaluate and improve LLM applications

Try out Evaluations - Read Docs - Quickstart Tutorials - Slack Community - Feature Request

Demo of UpTrain's LLM evaluations with scores for hallucinations, retrieved-context quality, response tonality for a customer support chatbot

UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, and embedding use cases), perform root cause analysis on failure cases, and give insights on how to resolve them.


Key Features 🔑

Data Security

All evaluations and analyses run locally on your system, ensuring that your data never leaves your secure environment (except for calls to the LLM when using model-grading checks).

Experimentation

Experiment with different embedding models like text-embedding-3-large/small, text-embedding-ada-002, baai/bge-large, etc. UpTrain supports HuggingFace models, Replicate endpoints, and custom models hosted on your own endpoint.

Cost Effective

By using model grading and introducing an 'Unclear' grade, we are able to use GPT-3.5-turbo-1106 as the default evaluator and get high-quality yet cost-effective scores.

Root Cause Analysis

You can perform root cause analysis on cases with either negative user feedback or low evaluation scores to understand which part of your LLM pipeline is giving suboptimal results. Check out the supported RCA templates.
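
As a rough illustration, a root cause analysis run looks like the sketch below. It assumes UpTrain's RcaTemplate enum and the perform_root_cause_analysis method described in the docs, with the RAG_WITH_CITATION template; the exact names and required data fields may differ in your version.

from uptrain import EvalLLM, RcaTemplate  # RcaTemplate import assumed per the UpTrain docs

OPENAI_API_KEY = "sk-***************"

# A hypothetical failure case from a RAG pipeline; the field names expected by
# the RAG_WITH_CITATION template are assumptions, check the docs for your version.
data = [{
    'question': 'Which is the most popular global sport?',
    'context': "Football is the world's most popular sport, followed by more than 4 billion people.",
    'cited_context': "Cricket is particularly popular in countries like India, Pakistan, Australia, and England.",
    'response': 'Cricket is the most popular sport worldwide.'
}]

eval_llm = EvalLLM(openai_api_key=OPENAI_API_KEY)

# Attribute the failure to a pipeline stage (e.g. poor retrieval, poor citation,
# or a hallucinated generation) instead of just flagging a low score.
results = eval_llm.perform_root_cause_analysis(
    data=data,
    rca_template=RcaTemplate.RAG_WITH_CITATION
)
print(results)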

Configure your own evaluation LLM

You can use any of the OpenAI, Anthropic, Mistral, or Azure OpenAI endpoints, or open-source LLMs hosted on Anyscale, as the evaluator.
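
For example, switching the evaluator looks roughly like the sketch below. It assumes UpTrain's Settings object with a model field and a provider-specific API-key field; the model string and key names here are illustrative, so check the docs for your provider.

from uptrain import EvalLLM, Evals, Settings

# The model name and the anthropic_api_key field are assumptions for illustration;
# use the identifiers documented for your provider.
settings = Settings(
    model='claude-2.1',
    anthropic_api_key='sk-ant-***************'
)

eval_llm = EvalLLM(settings=settings)

results = eval_llm.evaluate(
    data=[{'question': 'What does UpTrain do?', 'response': 'UpTrain evaluates LLM applications.'}],
    checks=[Evals.RESPONSE_RELEVANCE]
)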

Customize Evaluations

UpTrain provides many ways to customize evaluations: you can choose the evaluation method (chain-of-thought vs. classify), supply few-shot examples, add a scenario description, and create custom evaluators.
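
For instance, a custom guideline check might look like the sketch below; it assumes the GuidelineAdherence check and its guideline/guideline_name parameters, which may be named slightly differently in your UpTrain version.

from uptrain import EvalLLM, GuidelineAdherence

OPENAI_API_KEY = "sk-***************"

# A hypothetical guideline for a support bot: never quote prices directly.
pricing_guideline = GuidelineAdherence(
    guideline="The response should not mention specific prices; it should point users to the pricing page instead.",
    guideline_name="no_pricing"
)

eval_llm = EvalLLM(openai_api_key=OPENAI_API_KEY)

results = eval_llm.evaluate(
    data=[{
        'question': 'How much does the premium plan cost?',
        'response': 'The premium plan costs $30 per month.'
    }],
    checks=[pricing_guideline]
)
print(results)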

40+ Operators Supported

Support for 40+ operators such as BLEU, ROUGE, Embeddings Similarity, Exact match, etc.

Coming Soon:

  1. Experiment Dashboards
  2. Collaborate with your team
  3. Embedding visualization via UMAP and Clustering
  4. Pattern recognition among failure cases
  5. Prompt improvement suggestions

Pre-built Evaluations We Offer 📝

Quality of your responses

Response Completeness: Grades whether the response has answered all the aspects of the question specified.
Response Conciseness: Grades how concise the generated response is, i.e. whether it includes any additional information irrelevant to the question asked.
Response Relevance: Grades how relevant the generated response is to the question specified.
Response Validity: Grades whether the generated response is valid, i.e. whether it contains any information at all.
Response Consistency: Grades how consistent the response is with the question asked as well as with the context provided.

Quality of retrieved context and response groundedness

Context Relevance: Grades how relevant the retrieved context is to the question specified.
Context Utilization: Grades how completely the generated response answers the question, given the information provided in the context.
Factual Accuracy: Grades whether the generated response is factually correct and grounded in the provided context.

Language quality of the response

Language Features: Grades the language quality of the response, e.g. its fluency, coherence, and grammar.
Tonality: Grades whether the generated response matches the required persona's tone.

Code hallucinations in the response

Code Hallucination: Grades whether the code present in the generated response is grounded in the context.

Conversation as a whole

User Satisfaction: Grades user satisfaction in conversations between the user and the LLM/AI assistant.

Custom evaluations and others

Custom Guideline: Grades how well the LLM adheres to a provided guideline when generating a response.
Custom Prompts: Allows you to create your own set of evaluations.

Compare responses with ground truth

Response Matching: Grades how well the generated response matches the provided ground-truth response.

Safeguard system prompts and avoid LLM misuse

Prompt Injection: Grades whether the generated response is leaking any system prompt.
Jailbreak Detection: Grades whether the user's prompt is an attempt to jailbreak (i.e. generate illegal or harmful responses).

Get started 🙌

Install the package through pip:

pip install uptrain

How to use UpTrain:

You can run evaluations with the open-source version by providing your OpenAI API key.

from uptrain import EvalLLM, Evals
import json

# Your OpenAI API key is used only for the LLM-as-a-judge calls
OPENAI_API_KEY = "sk-***************"

data = [{
    'question': 'Which is the most popular global sport?',
    'context': "The popularity of sports can be measured in various ways, including TV viewership, social media presence, number of participants, and economic impact. Football is undoubtedly the world's most popular sport with major events like the FIFA World Cup and sports personalities like Ronaldo and Messi, drawing a followership of more than 4 billion people. Cricket is particularly popular in countries like India, Pakistan, Australia, and England. The ICC Cricket World Cup and Indian Premier League (IPL) have substantial viewership. The NBA has made basketball popular worldwide, especially in countries like the USA, Canada, China, and the Philippines. Major tennis tournaments like Wimbledon, the US Open, French Open, and Australian Open have large global audiences. Players like Roger Federer, Serena Williams, and Rafael Nadal have boosted the sport's popularity. Field Hockey is very popular in countries like India, Netherlands, and Australia. It has a considerable following in many parts of the world.",
    'response': 'Football is the most popular sport with around 4 billion followers worldwide'
}]

eval_llm = EvalLLM(openai_api_key=OPENAI_API_KEY)

# Run the selected checks; each check returns a score along with an explanation
results = eval_llm.evaluate(
    data=data,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.FACTUAL_ACCURACY, Evals.RESPONSE_COMPLETENESS]
)

print(json.dumps(results, indent=3))
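
Each result row mirrors the corresponding input row with added score and explanation fields. As a small usage sketch (the exact key names, following a score_<check> pattern, are assumptions), you can pull out individual scores like this:

# Scores are floats in [0, 1]; the key names below are assumptions based on the
# score_<check_name> convention and may differ across UpTrain versions.
for row in results:
    print('Context relevance:', row.get('score_context_relevance'))
    print('Factual accuracy:', row.get('score_factual_accuracy'))
    print('Response completeness:', row.get('score_response_completeness'))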

If you have any questions, please join our Slack community.

Speak directly with the maintainers of UpTrain by booking a call here.


Integrations 🤝

Eval Frameworks: OpenAI Evals ✅, EleutherAI LM Eval 🔜, BIG-Bench 🔜
LLM Providers: GPT-3.5-turbo ✅, GPT-4 ✅, Claude ✅, Cohere ✅, Llama2 ✅, Mistral ✅
LLM Packages: Langchain 🔜, Llama Index ✅, AutoGPT 🔜
Serving frameworks: HuggingFace ✅, Replicate ✅, AnyScale ✅, Together ai 🔜, Ollama 🔜
LLM Observability: Langfuse 🔜, Helicone 🔜
Vector DBs: Qdrant ✅, Pinecone 🔜, Chroma ✅

Resources 💡

  1. How to evaluate your LLM application
  2. How to detect jailbreaks
  3. Dealing with hallucinations

Why we are building UpTrain 🤔

Having worked with ML and NLP models for the last 8 years, we were continuously frustrated by the numerous hidden failures in our models, which led us to build UpTrain. UpTrain started out as an ML observability tool with checks to identify regressions in accuracy.

However, we soon realised that LLM developers face an even bigger problem: there is no good way to measure the accuracy of their LLM applications, let alone identify regressions.

We also saw the release of OpenAI Evals, which proposed using LLMs to grade model responses. Reading about how Anthropic leverages RLAIF gave us further confidence to approach this, and we dived right into LLM evaluations research (we are soon releasing a repository of awesome evaluations research).

Today, UpTrain is our attempt to bring order to LLM chaos and contribute back to the community. While a majority of developers still rely on intuition and productionise prompt changes after reviewing just a couple of cases, we have heard enough regression stories to believe that "evaluations and improvement" will be a key part of the LLM ecosystem as the space matures.

  1. Robust evaluations allow you to systematically experiment with different configurations and prevent regressions by helping you objectively select the best option.

  2. They help you understand where your systems are going wrong, find the root cause(s), and fix them long before your end users complain and potentially churn.

  3. Evaluations like prompt injection and jailbreak detection are essential to maintain safety and security of your LLM applications.

  4. Evaluations help you provide transparency and build trust with your end-users - especially relevant if you are selling to enterprises.


Why open-source?

  1. We understand that there is no one-size-fits-all solution when it comes to evaluations. We increasingly see developers wanting to modify the evaluation prompt, the set of choices, the few-shot examples, etc. We believe the best developer experience lies in open source, rather than in exposing 20 different parameters.

  2. Foster innovation: The field of LLM evaluations and LLM-as-a-judge is still quite nascent. We see a lot of exciting research happening almost daily, and being open source gives us and our community the right platform to implement those techniques and innovate faster.


How You Can Help 🙏

We are continuously striving to enhance UpTrain, and there are several ways you can contribute:

  1. Notice any issues or areas for improvement: If you spot anything wrong or have ideas for enhancements, please create an issue on our GitHub repository.

  2. Contribute directly: If you see an issue you can fix or have code improvements to suggest, feel free to contribute directly to the repository.

  3. Request custom evaluations: If your application requires a tailored evaluation, let us know, and we'll add it to the repository.

  4. Integrate with your tools: Need integration with your existing tools? Reach out, and we'll work on it.

  5. Assistance with evaluations: If you need assistance with evaluations, post your query on our Slack channel, and we'll resolve it promptly.

  6. Show your support: Star us ⭐ on GitHub to track our progress.

  7. Spread the word: If you like what we've built, give us a shoutout on Twitter!

Your contributions and support are greatly appreciated! Thank you for being a part of UpTrain's journey.


License 💻

This repo is published under the Apache 2.0 license, and we are committed to adding more functionality to the UpTrain open-source repo. We also have a managed version if you just want a more hands-off experience. Please book a demo call here.


Provide feedback (the harsher the better 😉)

We are building UpTrain in public. Help us improve by giving your feedback here.


Contributors 🖥️

We welcome contributions to UpTrain. Please see our contribution guide for details.
