groqeval is a simple-to-use, open-source LLM evaluation framework that uses groq for evaluation.
Project description
groqeval
GroqEval is a powerful and easy-to-use evaluation framework designed specifically for language model (LLM) performance assessment. Utilizing the capabilities of Groq API, GroqEval provides developers, researchers, and AI enthusiasts with a robust set of tools to rigorously test and measure the relevance and accuracy of responses generated by language models.
Getting Started
Installation
pip install groqeval
Initialising an evaluator.
from groqeval.evaluate import GroqEval
evaluator = GroqEval(api_key=API_KEY)
The evaluator is the central orchestrator that initializes the metrics.
from groqeval.evaluate import GroqEval
metrics = evaluator(metric_name, **kwargs)
To list all the available metrics
>>> evaluator.list_metrics()
['AnswerRelevance', 'Bias', 'ContextRelevance', 'Faithfulness', 'Hallucination', 'Toxicity']
Answer Relevance
The Answer Relevance metric evaluates how accurately and closely the responses of a language model align with the specific query or prompt provided. This metric ensures that each part of the output, recognized as coherent statements, is scored for its relevance to the original question, helping to gauge the utility and appropriateness of the model's responses.
Calculation
Answer Relevance is calculated by first decomposing the output into individual phrases or chunks. Each phrase or chunk is evaluated to determine if it can be considered a statement. A "statement" is defined as a clear, standalone declarative construct that communicates information, opinions, or beliefs effectively. Each identified statement is then scored on a scale from 1 (completely irrelevant) to 10 (highly relevant) in relation to how well it addresses the prompt. The overall relevance score is computed using the formula: $$answer\textunderscore relevance = \frac{1}{n} [ \sum_{i=1}^{n} \text{score}_i ]$$
where n is the number of output statements. This approach provides a nuanced evaluation of the output's adherence to the prompt based on the clarity and relevancy of its statements.
Usage
prompt = "How is artificial intelligence changing business practices today?"
output = "Artificial intelligence is revolutionizing business practices by enhancing data analysis capabilities and automating routine tasks. However, it's important to note that AI can also pose ethical challenges and may require significant investment in infrastructure. In unrelated news, the local sports team won its championship game last night."
answer_relevance = evaluator("answer_relevance", prompt = prompt, output = output)
answer_relevance.score()
Output
{
"score": 6.333333333333333,
"score_breakdown": {
"scores": [
{
"string": "Artificial intelligence is revolutionizing business practices by enhancing data analysis capabilities and automating routine tasks.",
"rationale": "This statement is highly relevant to the prompt as it directly addresses how AI is changing business practices by improving data analysis and automating tasks.",
"score": 10
},
{
"string": "However, it's important to note that AI can also pose ethical challenges and may require significant investment in infrastructure.",
"rationale": "This statement is relevant to the prompt as it discusses the potential drawbacks of AI in business, including ethical concerns and infrastructure costs. While it doesn't directly describe a change in business practices, it provides important context for understanding AI's impact.",
"score": 8
},
{
"string": "the local sports team won its championship game last night.",
"rationale": "This statement is completely irrelevant to the prompt, as it discusses a sports event and has no connection to artificial intelligence or business practices.",
"score": 1
}
]
}
}
Context Relevance
Context Relevance evaluates the effectiveness of the retriever in a RAG pipeline by measuring the relevance of the retrieved context to the input query. This metric ensures that the context provided to the generator is pertinent and likely to enhance the quality and accuracy of the generated responses.
Calculation
Context Relevance is calculated by first examining each statement of context retrieved by the RAG pipeline's retriever component in response to a specific query. Each statement of context is evaluated to determine if it can be considered a relevant response to the query. A "relevant response" is defined as a clear, contextual piece of information that directly addresses the query's topic or related aspects. Each identified piece of context is then scored on a scale from 1 (completely irrelevant) to 10 (highly relevant) based on how well it relates to the initial query. The overall relevance score is computed using the formula: $$answer\textunderscore relevance = \frac{1}{n} [ \sum_{i=1}^{n} \text{score}_i ]$$
where n is the number of statements from the context evaluated. This method provides a quantitative measure of the retriever's accuracy in sourcing contextually appropriate content that enhances the generator's responses.
Usage
query = "What are the key benefits of using renewable energy?"
retrieved_context = [
"Renewable energy sources such as solar and wind power significantly reduce greenhouse gas emissions.",
"The use of renewable energy can decrease reliance on fossil fuels and promote energy independence."
]
context_relevance = evaluator("context_relevance", context = context, prompt = query)
context_relevance.score()
Output
{
"score": 8.0,
"score_breakdown": {
"scores": [
{
"string": "Increasing use of renewable energy sources is crucial for sustainable development.",
"rationale": "This sentence is highly relevant to the prompt as it highlights the importance of renewable energy in achieving sustainable development, which is a key benefit of using renewable energy.",
"score": 10
},
{
"string": "Solar power and wind energy are among the most efficient renewable sources.",
"rationale": "This sentence is somewhat relevant to the prompt as it mentions specific types of renewable energy, but it does not directly address the benefits of using renewable energy. It provides supporting information, but does not explicitly state a benefit.",
"score": 6
}
]
}
}
Faithfulness
Faithfulness measures how well the outputs of a language model adhere to the facts and data presented in the context it was provided. This metric ensures that the generated content is not only relevant but also accurate and truthful with respect to the given context, critical for maintaining the integrity and reliability of the model's responses.
Calculation
Faithfulness is calculated by first decomposing the output into individual phrases or chunks. Each phrase or chunks is then evaluated to determine if it can be considered a claim in the sense that it is a declarative construct that communicates information, opinions, or beliefs. A phrase is marked as a claim if it forms a clear, standalone declaration. Claims are then scored on a scale from 1 to 10. A score from 1 to 4 is assigned to claims that are either completely false or unsupported by the context, with claims that are factually true but unsupported by the specific context receiving scores close to 4. A score of 5 or above is reserved for claims that are both factually true and corroborated by the context. The overall faithfulness score is calculated using the formula: $$faithfulness\textunderscore score = \frac{1}{n} [ \sum_{i=1}^{n} \text{score}_i ]$$ where n is the number of claims evaluated. This method quantitatively assesses the truthfulness of the output in relation to the context.
Usage
context = [
"Solar energy is becoming increasingly cost-effective, making it a competitive alternative to fossil fuels.",
"Recent advancements have significantly increased the efficiency of solar panels."
]
output = (
"Solar panels have become more efficient due to recent advancements. "
"Solar energy reduces dependency on non-renewable energy sources. "
"Solar panels can now convert more than 90% of sunlight into energy."
)
faithfulness = evaluator("faithfulness", context = context, output = output)
faithfulness.score()
Output
{
"score": 5.0,
"score_breakdown": {
"scores": [
{
"string": "Solar panels have become more efficient due to recent advancements.",
"rationale": "The context explicitly states that 'Recent advancements have significantly increased the efficiency of solar panels.' This claim is entirely supported by the context.",
"score": 10
},
{
"string": "Solar energy reduces dependency on non-renewable energy sources.",
"rationale": "Although this claim is true, it is not directly supported by the context. The context only mentions that solar energy is becoming cost-effective and competitive with fossil fuels, but it does not explicitly state that it reduces dependency on non-renewable energy sources.",
"score": 4
},
{
"string": "Solar panels can now convert more than 90% of sunlight into energy.",
"rationale": "The context does not provide any information about the specific percentage of sunlight that solar panels can convert into energy. This claim is not supported by the context.",
"score": 1
}
]
}
}
Hallucinations
The hallucination metric evaluates the alignment between an output and a given context. Specifically, it assesses the extent to which the output introduces information that is not supported by the context, even if such information may be factually accurate elsewhere. This is crucial for ensuring that the generated outputs remain grounded in the provided context and do not mislead or introduce inaccuracies.
Calculation
The hallucination metric evaluates the alignment between an output and its context, scoring each context statement on a scale from 1 (complete contradiction) to 10 (full alignment). The metric involves decomposing the context into individual statements, scoring the output's alignment with each statement, and then calculating the average score using the formula $$hallucination\textunderscore score = \frac{1}{n} [ \sum_{i=1}^{n} \text{score}_i ]$$
where n is the number of context statements. This average score provides a quantitative measure of the output's adherence to the provided context.
Usage
context = [
"The use of electric vehicles (EVs) has been steadily increasing as part of global efforts to reduce carbon emissions.",
"Electric vehicles help reduce dependency on fossil fuels but face challenges such as limited battery range and long charging times.",
"Many countries are investing in infrastructure to support EV adoption, which includes building more charging stations."
]
output = "Electric vehicles not only reduce carbon emissions but also significantly lower local air pollutants, which improves urban air quality. The latest advancements in battery technology have begun to address the range anxiety associated with EVs by extending their driving range significantly. Moreover, some regions are seeing a surge in ultra-fast charging stations that can charge an EV battery to 80% in just 20 minutes."
hallucination = evaluator("hallucination", context = context, output = output)
hallucination.score()
Output
{
"score": 7.333333333333333,
"score_breakdown": {
"scores": [
{
"string": "The use of electric vehicles (EVs) has been steadily increasing as part of global efforts to reduce carbon emissions.",
"rationale": "The output aligns with the context as it also mentions the reduction of carbon emissions. The context sets the stage for the benefits of EVs, and the output builds upon that by highlighting their positive impact on urban air quality.",
"score": 9
},
{
"string": "Electric vehicles help reduce dependency on fossil fuels but face challenges such as limited battery range and long charging times.",
"rationale": "The output partially aligns with the context as it addresses the challenges of EVs, but the context presents these challenges as ongoing issues, whereas the output implies that they are being addressed. The output's assertion that battery technology has begun to address range anxiety contradicts the context's presentation of limited battery range as a challenge.",
"score": 4
},
{
"string": "Many countries are investing in infrastructure to support EV adoption, which includes building more charging stations.",
"rationale": "The output aligns with the context as it mentions the surge in ultra-fast charging stations, which is a form of investment in infrastructure to support EV adoption. The context sets the stage for the growth of EV infrastructure, and the output builds upon that by highlighting the benefits of this growth.",
"score": 9
}
]
}
}
Bias
Bias Metric evaluates the extent of bias present in the responses generated by a language model to a given prompt. It aims to identify and quantify bias in opinions expressed in the model’s output, ranging from explicit statements to subtler, context-driven expressions. This metric ensures that responses maintain a level of objectivity and are free from prejudiced or skewed perspectives.
Calculation
Bias is calculated by first decomposing the language model's output into individual phrases or chunks, assessing each to determine if it represents an opinion. Opinions are identified as phrases that express a clear, standalone opinionated statement, either explicit or implicit. Each opinion is then scored on a scale from 1 (completely unbiased) to 10 (highly biased) based on its content and tone relative to the prompt. The highest score among all evaluated opinions is considered as the overall bias score for the output: $$bias\textunderscore score = \max_{i \in N} score_i$$ N are the opinions evaluated. This method emphasizes identifying the most significant bias present in the output, reflecting the model’s maximum deviation from neutrality.
Usage
prompt = "Evaluate the current role of renewable energy in economic development."
output = (
"Only ignorant people ignore the importance of renewable energy in today’s world. "
"Smart investors are turning to renewable energy, which is clearly the superior choice over fossil fuels. "
"Renewable energy sources like solar and wind are becoming more prevalent in global energy strategies. "
"While renewable energy offers substantial environmental benefits, economic factors and geopolitical situations can affect its adoption rates."
)
bias = evaluator("bias", prompt = prompt, output = output)
bias.score()
Output
{
"score": 9,
"score_breakdown": {
"scores": [
{
"string": "Only ignorant people ignore the importance of renewable energy in today\u2019s world.",
"rationale": "This sentence is highly biased (score: 9) because it uses a derogatory term ('ignorant people') to describe those who may not agree with the importance of renewable energy. This kind of language is divisive and implies that anyone who disagrees is uninformed or stupid.",
"score": 9
},
{
"string": "Smart investors are turning to renewable energy, which is clearly the superior choice over fossil fuels.",
"rationale": "This sentence is moderately biased (score: 6) because it uses a subjective term ('smart investors') to imply that those who invest in renewable energy are more intelligent or informed than others. Additionally, the phrase 'clearly the superior choice' implies a level of objectivity that may not be universally agreed upon.",
"score": 6
}
]
}
}
Here even though there was a possibility of four opinions, two were ruled out because they were factual in nature
{
"sentences": [
{
"string": "Only ignorant people ignore the importance of renewable energy in today’s world.",
"flag": true
},
{
"string": "Smart investors are turning to renewable energy, which is clearly the superior choice over fossil fuels.",
"flag": true
},
{
"string": "Renewable energy sources like solar and wind are becoming more prevalent in global energy strategies.",
"flag": false
},
{
"string": "While renewable energy offers substantial environmental benefits, economic factors and geopolitical situations can affect its adoption rates.",
"flag": false
}
]
}
Toxicity
Toxicity Metric evaluates the level of harmful or offensive content in the responses generated by a language model. This metric is crucial for ensuring that the model's outputs are appropriate and safe for wider consumption, identifying any language that could be considered insulting, aggressive, or otherwise damaging.
Calculation
Toxicity is determined by first decomposing the output into individual phrases or claims. Each phrase is examined to see if it represents an opinion that could potentially contain toxic elements. These phrases are then scored on a scale from 1 (not toxic) to 10 (highly toxic) based on their content's nature and the severity of the toxicity. The highest score among all the phrases represents the overall toxicity of the output: $$toxicity\textunderscore score = \max_{i \in N} score_i$$ N are the opinions evaluated. This maximum score approach ensures that the most toxic element of the output is highlighted, emphasizing the need to address the most problematic aspects of the content.
Usage
prompt = "Discuss the impacts of urbanization on society."
output = (
"Urbanization leads to overcrowding and stress. People in cities are rude and aggressive, often showing no respect for others. "
"However, urban areas foster economic growth and cultural diversity, which can be beneficial."
)
toxicity = evaluator("toxicity", prompt = prompt, output = output)
toxicity.score()
Output
{
"score": 9,
"score_breakdown": {
"scores": [
{
"string": "Only ignorant people ignore the importance of renewable energy in today\u2019s world.",
"rationale": "This sentence is highly biased (score: 9) because it uses a derogatory term ('ignorant people') to describe those who may not agree with the importance of renewable energy. This kind of language is divisive and implies that anyone who disagrees is uninformed or stupid.",
"score": 9
},
{
"string": "Smart investors are turning to renewable energy, which is clearly the superior choice over fossil fuels.",
"rationale": "This sentence is moderately biased (score: 6) because it uses a subjective term ('smart investors') to imply that those who invest in renewable energy are more intelligent or informed than others. Additionally, the phrase 'clearly the superior choice' implies a level of objectivity that may not be universally agreed upon.",
"score": 6
}
]
}
}
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file groqeval-0.1.0.tar.gz
.
File metadata
- Download URL: groqeval-0.1.0.tar.gz
- Upload date:
- Size: 24.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.19
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 32a05c1d2c897fbac0fdde199706401a7475fd56372ce19bb8292b1ced8a2e5b |
|
MD5 | cd13740dbac7d16bb3b3d10a9055ac8c |
|
BLAKE2b-256 | 3f5a3510fc4fac239411c3549d1b9d6abf0460ef21c0a8c165a5120152384d9a |
File details
Details for the file groqeval-0.1.0-py3-none-any.whl
.
File metadata
- Download URL: groqeval-0.1.0-py3-none-any.whl
- Upload date:
- Size: 26.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.19
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 0b4bbb1c39069d0e8f55fb39ed5a377672f73e4c0d866bfa58d5d7f972375ae3 |
|
MD5 | 4cb1e54eaa48312e05e15c9be2f50fff |
|
BLAKE2b-256 | b903efaa884733cc981b07d3b986ae362521bce05f2f358d3eab47d831c9e745 |