
UpTrain - ML Observability and Retraining Framework

Project description


An open-source framework to evaluate, test and monitor LLM applications

Docs - Slack Community - Bug Report - Feature Request


UpTrain is a Python framework that ensures your LLM applications perform reliably by letting you check aspects such as correctness, structural integrity, bias, and hallucination. UpTrain can be used to:

  1. Validate your model's responses and safeguard your users against hallucinations, bias, incorrect output formats, and more.
  2. Experiment across multiple model providers and prompt templates, and quantify your model's performance.
  3. Monitor your model's performance in production and protect yourself against unwanted drift.

Key Features 💡

Get started 🙌

To run it on your machine, check out the Quickstart tutorial.

Install the package through pip:

pip install uptrain

Note: UpTrain uses commonly used Python libraries such as openai-evals and sentence-transformers. To make sure all functionality works, use the uptrain-add command to install the full version of the package.

uptrain-add --feature full

How to define checks:

Say we want to check whether our model's responses contain any grammatical mistakes.

# Define your checkset - a list of checks, the dataset file,
# and API keys (import paths shown for the 0.3.x package layout)

from uptrain.framework import Check, CheckSet, Settings
from uptrain.operators import GrammarScore, JsonReader, PlotlyChart

checkset = CheckSet(
    checks=[
        Check(
            name="grammar_score",
            operators=[
                GrammarScore(
                    col_in_text="model_response",
                    col_out="grammar_score",
                ),
            ],
            plots=[PlotlyChart.Table(title="Grammar scores")],
        ),
    ],
    source=JsonReader(fpath='...'),
)
settings = Settings(openai_api_key='...')

checkset.setup(settings)
checkset.run()

Integrations

Eval Frameworks         LLM Providers       LLM Packages      Serving frameworks
OpenAI Evals ✅         GPT-3.5-turbo ✅    Langchain 🔜      HuggingFace 🔜
EleutherAI LM Eval 🔜   GPT-4 ✅            Llama Index 🔜    Replicate 🔜
BIG-Bench 🔜            Claude 🔜           AutoGPT 🔜
                        Cohere 🔜

UpTrain in Action

Experimentation

You can use the UpTrain framework to run and compare LLM responses for different prompts, models, LLM chains, etc. Check out the experimentation tutorial to learn more.

Validation

You can use the UpTrain Validation Manager to define checks and retry logic, and to validate your LLM responses before showing them to your users. Check out the tutorial here.
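The validate-and-retry pattern behind this can be sketched in plain Python. Note that `generate` and `grammar_ok` below are hypothetical stand-ins for your LLM call and an UpTrain-style check; they are not part of the UpTrain API:

```python
# Illustrative validate-and-retry loop: call the model, run a check on the
# response, and retry with a corrective instruction if the check fails.
# `generate` and `grammar_ok` are hypothetical placeholders, not UpTrain APIs.

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return "The answer is 42." if "retry" in prompt else "teh answer is 42"

def grammar_ok(response: str) -> bool:
    # Placeholder check: flags a known typo instead of computing a real
    # grammar score.
    return "teh" not in response

def validated_response(prompt: str, max_retries: int = 2) -> str:
    response = generate(prompt)
    for _ in range(max_retries):
        if grammar_ok(response):
            return response
        # Retry with a corrective instruction appended to the prompt.
        response = generate(prompt + " (retry: fix grammar)")
    return response

print(validated_response("What is the answer?"))  # -> The answer is 42.
```

In a real application you would replace `grammar_ok` with a checkset run and gate the retry on the resulting score.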

Monitoring

You can use the UpTrain framework to continuously monitor your model's performance and get real-time insights on how well it is doing on a variety of evaluation metrics. Check out the monitoring tutorial to learn more.
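The core idea of continuous monitoring, tracking an evaluation metric over time and alerting when it degrades, can be illustrated with a small rolling-average sketch. This mimics the concept only; `ScoreMonitor` is not part of the UpTrain API:

```python
# Illustrative production-monitoring sketch: keep a rolling mean of an
# evaluation score and flag when it drops below a threshold.
from collections import deque

class ScoreMonitor:
    def __init__(self, window: int = 5, threshold: float = 0.7):
        self.scores = deque(maxlen=window)  # only the most recent scores
        self.threshold = threshold

    def record(self, score: float) -> bool:
        """Record a score; return True while the rolling mean stays healthy."""
        self.scores.append(score)
        return sum(self.scores) / len(self.scores) >= self.threshold

monitor = ScoreMonitor(window=3, threshold=0.7)
print(monitor.record(0.9))  # True  (mean 0.90)
print(monitor.record(0.8))  # True  (mean 0.85)
print(monitor.record(0.4))  # True  (mean 0.70)
print(monitor.record(0.3))  # False (mean of last 3 is 0.50)
```

A production system would feed real check scores into such a monitor and trigger alerts or fallbacks when `record` returns False.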

Why UpTrain 🤔?

Large language models are trained on billions of data points and perform remarkably well across a wide variety of tasks. But one thing these models are not good at is being deterministic. Even with the most well-crafted prompts, a model can misbehave for certain inputs, whether through hallucinations, wrong output structure, or toxic, biased, or irrelevant responses, and the range of error modes can be immense.

To ensure your LLM applications work reliably and correctly, UpTrain makes it easy for developers to evaluate the responses of their applications on multiple criteria. UpTrain's evaluation framework can be used to:

  1. Validate (and correct) the model's response before showing it to the user.
  2. Get quantitative measures to experiment across multiple prompts, model providers, etc.
  3. Run unit tests to ensure no buggy prompt or code gets pushed into production.
  4. Monitor your LLM applications in real time and understand when they go wrong, so you can fix them before users complain.

We are constantly working to make UpTrain better. Want a new feature or need any integrations? Feel free to create an issue or contribute directly to the repository.

License 💻

This repo is published under the Apache 2.0 license. We are also working on a hosted offering to make kicking off eval runs easier - please fill out this form to get a waitlist slot.

Stay Updated ☎️

We are continuously adding tons of features and use cases. Please support us by giving the project a star ⭐!

Provide feedback (the harsher the better 😉)

We are building UpTrain in public. Help us improve by giving your feedback here.

Contributors 🖥️

We welcome contributions to UpTrain. Please see our contribution guide for details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

uptrain-0.3.3.tar.gz (134.8 kB view details)

Uploaded Source

Built Distribution

uptrain-0.3.3-py3-none-any.whl (192.0 kB view details)

Uploaded Python 3

File details

Details for the file uptrain-0.3.3.tar.gz.

File metadata

  • Download URL: uptrain-0.3.3.tar.gz
  • Upload date:
  • Size: 134.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.17

File hashes

Hashes for uptrain-0.3.3.tar.gz
Algorithm Hash digest
SHA256 3c09bee4c2b5c577677630ee26851082947826951df10aa264c5de1c1ad7b5b3
MD5 98841bc762feda51cd6d864f50210fb1
BLAKE2b-256 01f2fa82d56a9ab42ad2f519805c0398b1578aa205deeb0ab8d2c3d60c4b40e6

See more details on using hashes here.
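The published digests above can be checked locally with Python's standard hashlib module. The file path and digest in the commented usage line are stand-ins; substitute the archive you actually downloaded and the full digest from the table:

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (with the real file and the full digest from the table above):
# assert sha256_of("uptrain-0.3.3.tar.gz") == "3c09bee4..."
```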

File details

Details for the file uptrain-0.3.3-py3-none-any.whl.

File metadata

  • Download URL: uptrain-0.3.3-py3-none-any.whl
  • Upload date:
  • Size: 192.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.17

File hashes

Hashes for uptrain-0.3.3-py3-none-any.whl
Algorithm Hash digest
SHA256 420de35b6a790c3845e4cdd9f9bef95703ddd30848f554fa76f50227c47f3357
MD5 fff79e19f5308c439f1ae06de1f997e5
BLAKE2b-256 cc931bb89f7f8a7c2ffdd8dc433f4b1e6dd8f6b7160d7ebff6fbfb4e4f4c2fcd

See more details on using hashes here.
