
A Python package for evaluating LLM application outputs.

Project description

GroundedAI

Overview

The grounded-ai package is a powerful tool developed by GroundedAI to evaluate the performance of large language models (LLMs) and their applications. It leverages small language models and adapters to compute various metrics, providing insights into the quality and reliability of LLM outputs.

Features

  • Metric Evaluation: Compute a wide range of metrics to assess the performance of LLM outputs, including:

    • Factual accuracy
    • Relevance to the given context
    • Potential biases or toxicity
    • Hallucination
  • Small Language Model Integration: Utilize state-of-the-art small language models, optimized for efficient evaluation tasks, to analyze LLM outputs accurately and quickly.

  • Adapter Support: Leverage GroundedAI's proprietary adapters, such as the phi3-toxicity-judge adapter, to fine-tune the small language models for specific domains, tasks, or evaluation criteria, ensuring tailored and precise assessments.

  • Flexible Input/Output Handling: Accept LLM outputs in various formats (text, JSON, etc.) and provide evaluation results in a structured and easily consumable manner.

  • Customizable Evaluation Pipelines: Define and configure evaluation pipelines that combine multiple metrics, weights, and thresholds to match your requirements (see the sketch after this list).

  • Reporting and Visualization: Generate comprehensive reports and visualizations to communicate evaluation results effectively, facilitating decision-making and model improvement processes.
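
A pipeline API is not yet documented, so the following is only a minimal sketch of the idea in plain Python, built around the ToxicityEvaluator demonstrated later on this page. The run_pipeline function, its weights, and its threshold are illustrative assumptions, not part of the package.

from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

def run_pipeline(texts, weights=None, threshold=50.0):
    # Hypothetical pipeline: weights and threshold are illustrative, not package API.
    weights = weights or {"toxicity": 1.0}
    evaluator = ToxicityEvaluator(quantization=True)
    evaluator.warmup()
    result = evaluator.evaluate(texts)
    # Combine metric scores into one weighted score (a single metric here).
    score = weights["toxicity"] * result["percentage_toxic"]
    return {"score": score, "passed": score < threshold, "detail": result}

With more than one evaluator, the same pattern extends naturally: each metric contributes its score times its weight, and the threshold decides pass or fail.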

Getting Started

Install the grounded-ai package:

pip install grounded-ai

Example Usage: Toxicity Evaluation

The ToxicityEvaluator class evaluates the toxicity of a list of texts. Here's an example of how to use it:

from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

# quantization=True enables quantization for faster, lower-memory inference
toxicity_evaluator = ToxicityEvaluator(quantization=True)
# Load the base model and the GroundedAI adapter
toxicity_evaluator.warmup()
# Texts to evaluate for toxicity
data = [
    "That guy is so stupid and ugly",
    "Bunnies are the cutest animals in the world"
]
response = toxicity_evaluator.evaluate(data)
# Output
# {'toxic': 1, 'non-toxic': 1, 'percentage_toxic': 50.0}

In this example, we initialize the ToxicityEvaluator. Setting the optional quantization parameter to True enables quantization for faster inference with less memory.

We then load the base model and the GroundedAI adapter using the warmup() method.

Next, we define a list of texts (data) that we want to evaluate for toxicity.

Finally, we call the evaluate method with the data list, and it returns a dictionary containing the number of toxic and non-toxic texts, as well as the percentage of toxic texts.

In the output, we can see that out of the two texts, one is classified as toxic, and the other as non-toxic, resulting in a 50% toxicity percentage.
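
Concretely, the reported percentage is just the toxic count divided by the total. Assuming the response dictionary shown above:

toxic, non_toxic = response["toxic"], response["non-toxic"]
percentage = 100.0 * toxic / (toxic + non_toxic)  # 50.0 for the two texts above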

Documentation

Detailed documentation, including API references, examples, and guides, is coming soon at https://groundedai.tech/api.

Contributing

We welcome contributions from the community! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on the GroundedAI grounded-eval GitHub repository.

License

The grounded-ai package is released under the MIT License.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

grounded_ai-0.0.6.tar.gz (9.5 kB)


Built Distribution

grounded_ai-0.0.6-py3-none-any.whl (9.9 kB)


File details

Details for the file grounded_ai-0.0.6.tar.gz.

File metadata

  • Download URL: grounded_ai-0.0.6.tar.gz
  • Upload date:
  • Size: 9.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.5

File hashes

Hashes for grounded_ai-0.0.6.tar.gz

  • SHA256: 4f6749aca9267b206f7e4ed06cb93b210679a499bb67c922f1283ab453873641
  • MD5: e23c4001b2f8e85993d9e8ff0bd17cb7
  • BLAKE2b-256: 68bb9cb20b2ec33f2cb792c1c62ca2b786a688a13584c622205419840067b6db
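
To verify a downloaded file against these digests, you can compute the SHA256 locally; a minimal sketch using Python's standard hashlib module:

import hashlib

# Compare the local file's SHA256 against the digest published above.
with open("grounded_ai-0.0.6.tar.gz", "rb") as f:
    sha256 = hashlib.sha256(f.read()).hexdigest()
print(sha256 == "4f6749aca9267b206f7e4ed06cb93b210679a499bb67c922f1283ab453873641")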


File details

Details for the file grounded_ai-0.0.6-py3-none-any.whl.

File metadata

  • Download URL: grounded_ai-0.0.6-py3-none-any.whl
  • Upload date:
  • Size: 9.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.5

File hashes

Hashes for grounded_ai-0.0.6-py3-none-any.whl

  • SHA256: 461087a8d532f1437c03b4fdf730b546d4eeaa863e888e441791ee051e8b2680
  • MD5: fec5481b35166856ca21bcf68a0b2f8e
  • BLAKE2b-256: 2b702f924ba9098a9c45ca6bae976856ae39aabd260f63a3dc525dd268c68c4d

