
A Python package for evaluating LLM application outputs.


GroundedAI

Overview

The grounded-ai package is a toolkit developed by GroundedAI for evaluating the performance of large language models (LLMs) and their applications. It leverages our own fine-tuned small language models and metric-specific adapters to compute various metrics, providing insight into the quality and reliability of LLM outputs. Our models are available at https://huggingface.co/grounded-ai

Features

  • Metric Evaluation: Compute a wide range of metrics to assess the performance of LLM outputs, including:

    • Factual accuracy
    • Relevance to the given context
    • Potential biases or toxicity
    • Hallucination
  • Small Language Model Integration: Utilize state-of-the-art small language models, optimized for efficient evaluation tasks, to analyze LLM outputs accurately and quickly.

  • Adapter Support: Leverage GroundedAI's proprietary adapters, such as the phi3-toxicity-judge adapter, to fine-tune the small language models for specific domains, tasks, or evaluation criteria, ensuring tailored and precise assessments.

  • Flexible Input/Output Handling: Accept LLM outputs in various formats (text, JSON, etc.) and provide evaluation results in a structured, easily consumable form (see the sketch below).
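
For example, outputs captured as JSON can be flattened to plain strings before evaluation. A minimal sketch (the payload shape and the "completion" field name are hypothetical; only the list-of-strings input format is taken from the toxicity example below):

import json

# Hypothetical JSON payload of LLM outputs; the "completion" field name
# is an assumption for illustration
raw = '[{"completion": "Great answer"}, {"completion": "Off-topic answer"}]'

# Flatten to the list-of-strings format the evaluators consume
texts = [record["completion"] for record in json.loads(raw)]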

Getting Started

Install the grounded-ai package from PyPI:

pip install grounded-ai==1.0.5

Example Usage: Toxicity Evaluation

The ToxicityEvaluator class is used to evaluate the toxicity of a given text. Here's an example of how to use it:

from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

# quantization is optional; it speeds up inference and reduces memory use
toxicity_evaluator = ToxicityEvaluator(quantization=True)
# Load the base model and the GroundedAI adapter
toxicity_evaluator.warmup()
# Texts to evaluate for toxicity
data = [
    "That guy is so stupid and ugly",
    "Bunnies are the cutest animals in the world"
]
response = toxicity_evaluator.evaluate(data)
# Output
# {'toxic': 1, 'non-toxic': 1, 'percentage_toxic': 50.0}

In this example, we initialize the ToxicityEvaluator. The optional quantization parameter is set to True to enable quantization, which speeds up inference and reduces memory usage.
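
If memory is less of a concern, quantization can be left off; a minimal sketch, assuming that passing quantization=False simply disables it:

# Full-precision initialization; assumes quantization=False disables
# quantization at the cost of higher memory use
toxicity_evaluator = ToxicityEvaluator(quantization=False)
toxicity_evaluator.warmup()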

We then load the base model and the GroundedAI adapter using the warmup() method.

Next, we define a list of texts (data) that we want to evaluate for toxicity.

Finally, we call the evaluate method with the data list, and it returns a dictionary containing the number of toxic and non-toxic texts, as well as the percentage of toxic texts.

In the output, we can see that of the two texts, one is classified as toxic and the other as non-toxic, yielding a toxicity percentage of 50%.
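
For reference, the aggregation behaves like the following sketch; the summarize helper is hypothetical, and only the shape of the returned dictionary is taken from the example above:

# Hypothetical illustration of how the summary dictionary could be derived
# from per-text labels; not the package's actual internals
def summarize(labels):
    toxic = sum(1 for label in labels if label == "toxic")
    non_toxic = len(labels) - toxic
    percentage = 100.0 * toxic / len(labels) if labels else 0.0
    return {"toxic": toxic, "non-toxic": non_toxic, "percentage_toxic": percentage}

print(summarize(["toxic", "non-toxic"]))
# {'toxic': 1, 'non-toxic': 1, 'percentage_toxic': 50.0}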

Documentation

Detailed documentation, including API references, examples, and guides, is coming soon at https://groundedai.tech/api.

Contributing

We welcome contributions from the community! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on the GroundedAI GitHub repository.

License

The grounded-ai package is released under the MIT License.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • grounded_ai-1.0.5.tar.gz (10.7 kB, source)

Built Distribution

  • grounded_ai-1.0.5-py3-none-any.whl (13.2 kB, Python 3)

File details

Details for the file grounded_ai-1.0.5.tar.gz.

File metadata

  • Download URL: grounded_ai-1.0.5.tar.gz
  • Upload date:
  • Size: 10.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.5

File hashes

Hashes for grounded_ai-1.0.5.tar.gz:

  • SHA256: 171f652befdb00fa45db24ba1abaefd5c52868a11514349ec3be6cfa54a52e14
  • MD5: 247e8a7f5b651ff1b63e6a8f1756b692
  • BLAKE2b-256: bef69a4876663d5445a0f439fb3a0902e522bf611d4a42621153629e5b6f5d0a


File details

Details for the file grounded_ai-1.0.5-py3-none-any.whl.

File metadata

  • Download URL: grounded_ai-1.0.5-py3-none-any.whl
  • Upload date:
  • Size: 13.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.5

File hashes

Hashes for grounded_ai-1.0.5-py3-none-any.whl:

  • SHA256: c20babf524414c598860a68897d9e451b5e80b66244f572a7c6da341aa34e6dc
  • MD5: b228436516caec144b270d0554de9915
  • BLAKE2b-256: a87d0d760ce8cdbb7991e34bc063abdfffa0a26019f116926a78dc966921223f

