
🔮 Sagacify LLM Evaluation ML Library 🔮

Welcome to the Saga LLM Evaluation ML library, a versatile Python library designed for evaluating the performance of large language models in Natural Language Processing (NLP) tasks. Whether you’re developing language models, chatbots, or other NLP applications, our library provides a comprehensive suite of metrics to help you assess the quality of your language models.

The metrics are divided into four categories: embedding-based, language-model-based, LLM-based, and retrieval metrics. The library is built on top of libraries such as Hugging Face Transformers and LangChain, with additional metrics and features. Depending on the availability of references, context, and other inputs, you can use the metrics individually or run them all at once with the Scorer provided by this library.
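
For illustration, here is a minimal sketch of scoring a single response with the Scorer. The import path, class name, and argument names below are assumptions based on this description, not the confirmed API; check the documentation for the exact interface.

```python
# A minimal sketch, not the confirmed API: the import path, class name,
# and argument names below are assumptions based on the description above.
from saga_llm_evaluation import Scorer  # hypothetical import path

scorer = Scorer()  # which metrics run depends on which inputs you supply

scores = scorer.score(
    user_prompt="What is the capital of Belgium?",     # prompt given to your model
    prediction="The capital of Belgium is Brussels.",  # model output under evaluation
    reference="Brussels is the capital of Belgium.",   # optional gold reference
    knowledge="Brussels is the capital of Belgium.",   # optional grounding context
)
print(scores)  # metric scores plus Elemeta metafeatures (see below)
```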

Moreover, the Scorer extracts metafeatures from the prompt, prediction, and knowledge via the Elemeta library. This allows you to monitor the performance of your model based on the structure of those inputs.
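
For context, Elemeta's extraction looks roughly like this on its own. The runner below follows Elemeta's documented interface, but verify it against the Elemeta version installed with this library:

```python
# Standalone Elemeta usage: extract structural metafeatures from a text.
from elemeta.nlp.runners.metafeature_extractors_runner import MetafeatureExtractorsRunner

runner = MetafeatureExtractorsRunner()
metafeatures = runner.run("What is the capital of Belgium?")
print(metafeatures)  # mapping of metafeature names (word count, etc.) to values
```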

Developed by Sagacify.

Available Metrics

  • Embedding-based Metrics:
    • BERTScore: A metric that measures the similarity between model-generated text and human-generated references. It leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity, making it a valuable tool for evaluating semantic content (a usage sketch follows this list). Read more
    • MAUVE: Computes the divergence between the learned distributions from text generated by a text generation model and human-written text. Read more
  • Language-Model-based Metrics:
    • BLEURTScore: A learned evaluation metric for Natural Language Generation. It is built using multiple phases of transfer learning starting from a pre-trained BERT model, employing another pre-training phase using synthetic data, and finally trained on WMT human annotations. Read more
    • Q-Squared: A reference-free metric that aims to evaluate the factual consistency of knowledge-grounded dialogue systems. The approach is based on automatic question generation and question answering. Specifically, it generates questions from the knowledge base and uses the generated questions to evaluate the factual consistency of the generated response. Read more
  • LLM-based Metrics:
    • SelfCheck-GPT (QA approach): A metric that evaluates the consistency of a language model's output by comparing it against multiple sampled outputs from the same model; facts the samples disagree on are flagged as likely hallucinations. It is a zero-shot approach to fact-checking the responses of black-box models. Read more
    • G-Eval: A framework that uses LLMs with chain-of-thought (CoT) prompting and a form-filling paradigm to assess the quality of NLG outputs. It has been tested on two generation tasks, text summarization and dialogue generation, with a variety of evaluation criteria; both the task and the criteria can be adapted to your application. Read more
    • GPT-Score: An evaluation framework that utilizes the emergent abilities (e.g., zero-shot instruction following) of generative pre-trained models to score generated texts. Experimental results on four text generation tasks, 22 evaluation aspects, and 37 corresponding datasets show that the desired evaluation aspect can be specified simply as a natural-language instruction. Read more
    • Relevance: A metric that uses an evaluator LLM to judge how relevant the generated text is to the user prompt.
    • Correctness: A metric that uses an evaluator LLM to judge whether the generated text is factually correct.
    • Faithfulness: A metric that uses an evaluator LLM to judge whether the generated text stays faithful to the provided knowledge.
    • NegativeReject: A metric that uses an evaluator LLM to judge negative rejection, i.e., whether the model appropriately declines to answer when the available information is insufficient.
    • HallucinationScore: A metric that uses an evaluator LLM to detect content in the generated text that is not supported by the input.
  • Retrieval Metrics:
    • Accuracy: A metric that uses an evaluator LLM to judge the accuracy of the retrieved information.
    • Relevance: A metric that uses an evaluator LLM to judge how relevant the retrieved information is to the query.
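
As a concrete example, here is what computing one of these metrics individually might look like. The module path and the compute() signature are assumptions for illustration; the actual names may differ, so check the documentation.

```python
# A minimal sketch of using one metric individually. The module path and
# class interface are assumptions based on the description above.
from saga_llm_evaluation.helpers.embedding_metrics import BERTScore  # hypothetical path

metric = BERTScore()
result = metric.compute(
    references=["Brussels is the capital of Belgium."],
    predictions=["The capital of Belgium is Brussels."],
)
print(result)  # similarity scores derived from BERT embeddings
```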

Each of these metrics uses either ChatGPT or a quantized LLaMA model by default to evaluate the generated text, but you can specify which model to use for evaluation yourself; see the Usage section for more information.
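
As a hypothetical sketch of that customization, the snippet below passes a LangChain chat model as the evaluator. The llm parameter name and the metric's import path are assumptions, not the library's confirmed interface.

```python
# Hypothetical sketch: supplying your own evaluator model instead of the
# default ChatGPT / quantized LLaMA. Parameter and module names are assumed.
from langchain_openai import ChatOpenAI  # any LangChain-compatible chat model
from saga_llm_evaluation.helpers.llm_metrics import Relevance  # hypothetical path

evaluator = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # assumes OPENAI_API_KEY is set

metric = Relevance(llm=evaluator)  # the "llm" keyword is an assumption
score = metric.compute(
    user_prompts=["What is the capital of Belgium?"],
    predictions=["The capital of Belgium is Brussels."],
)
print(score)
```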

Feel free to contribute and make this library even more powerful!
We appreciate your support. 💻💪🏻

Download files

Download the file for your platform. If you're not sure which to choose, a plain `pip install saga-llm-evaluation` will select the appropriate distribution automatically.

Source Distribution

saga_llm_evaluation-0.11.5.tar.gz (23.7 kB)

Uploaded Source

Built Distribution

saga_llm_evaluation-0.11.5-py3-none-any.whl (25.2 kB)

Uploaded Python 3

File details

Details for the file saga_llm_evaluation-0.11.5.tar.gz.

File metadata

  • Download URL: saga_llm_evaluation-0.11.5.tar.gz
  • Size: 23.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.10.12 Linux/6.8.0-1014-azure

File hashes

Hashes for saga_llm_evaluation-0.11.5.tar.gz

Algorithm    Hash digest
SHA256       2c63240247fee4fc2ef71f0ffb1d68c49da68c12bfe6f756c850bd28047ad24f
MD5          c05b8388acbdaf9a31195fbd671aac96
BLAKE2b-256  e6b13db14e0f6982e75a8a5562ec3dc2a8faaf5e1a05fb45fa05e338904dae88

These hashes let you verify the integrity of a downloaded file, for example:
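
```python
# Verify a downloaded distribution against its published SHA256 hash,
# using only the Python standard library.
import hashlib

expected = "2c63240247fee4fc2ef71f0ffb1d68c49da68c12bfe6f756c850bd28047ad24f"
with open("saga_llm_evaluation-0.11.5.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, "hash mismatch: file may be corrupted or tampered with"
```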

File details

Details for the file saga_llm_evaluation-0.11.5-py3-none-any.whl.

File hashes

Hashes for saga_llm_evaluation-0.11.5-py3-none-any.whl

Algorithm    Hash digest
SHA256       ebceaded41eaf588dbeee05d5d63a83631617187c3052119a69010497b9cf956
MD5          f3d6b3f60c7d78938c6a0289338c91e3
BLAKE2b-256  368f7820ae2e7d7f990a8703a00d07b33b26015170b7c9e7308fba46a1b81f7d

The wheel can be verified the same way using the hashes above.
