Inspeq AI SDK
Inspeq AI Python SDK

For Inspeq developers
- Clone the repository
- Create a virtual environment
- Activate the virtual environment
- Go to the inspeq-py-sdk folder
- Install all requirements, then use the SDK as needed
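The steps above might look like the following shell session. This is a sketch: the repository URL is an assumption, and the activation command differs on Windows.

```shell
# Hypothetical setup commands; the repository URL is a placeholder assumption.
git clone https://github.com/inspeq/inspeq-py-sdk.git
cd inspeq-py-sdk
python -m venv venv
source venv/bin/activate        # on Windows: venv\Scripts\activate
pip install -r requirements.txt
```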
For other users

Installation command:

```shell
pip install inspeqai
```
Get API keys
Get your API keys from Here
Usage
Create a main.py and use the code below:

```python
from inspeq.client import Evaluator

# Initialization
API_KEY = "your_sdk_api_key"
inspeq_instance = Evaluator(sdk_api_key=API_KEY)

# Example input data
input_data = {
    "llm_input_query": "your_llm_input_query",
    "llm_input_context": "your_llm_input_context",
    "llm_output": "your_llm_output",
}

# Note: do not change the structure of input_data; keep the keys exactly as
# they are. Put your own data in place of your_llm_input_context,
# your_llm_input_query, and your_llm_output to evaluate it with our metrics.

print("\nGrammatical correctness is:")
print(inspeq_instance.grammatical_correctness(input_data))

# Get all metrics in one call
print(inspeq_instance.get_all_metrics(input_data))
```
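Because the SDK expects the input dictionary to have exactly these three keys, a small guard can catch malformed payloads before any call is made. The `validate_input` helper below is hypothetical, not part of the SDK:

```python
# Hypothetical helper (not part of the inspeqai SDK): verify a payload has
# exactly the three keys the evaluator expects before sending it.
REQUIRED_KEYS = {"llm_input_query", "llm_input_context", "llm_output"}

def validate_input(data: dict) -> dict:
    missing = REQUIRED_KEYS - data.keys()
    extra = data.keys() - REQUIRED_KEYS
    if missing or extra:
        raise ValueError(f"bad payload: missing={sorted(missing)}, extra={sorted(extra)}")
    return data

payload = validate_input({
    "llm_input_query": "What is the capital of France?",
    "llm_input_context": "France is a country in Europe. Its capital is Paris.",
    "llm_output": "The capital of France is Paris.",
})
print("payload ok:", sorted(payload))
```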
All metrics provided by the Inspeq SDK

```python
print("Factual Consistency:", inspeq_instance.factual_consistency(input_data))
print("Answer Relevance:", inspeq_instance.answer_relevance(input_data))
print("Response Tone:", inspeq_instance.response_tone(input_data))
print("Grammatical Correctness:", inspeq_instance.grammatical_correctness(input_data))
print("Fluency:", inspeq_instance.fluency(input_data))
print("Do Not Use Keywords:", inspeq_instance.do_not_use_keywords(input_data))
print("Word Limit Test:", inspeq_instance.word_limit_test(input_data))
print("Conceptual Similarity:", inspeq_instance.conceptual_similarity(input_data))
print("Coherence:", inspeq_instance.coherence(input_data))
print("Readability:", inspeq_instance.readability(input_data))
print("Clarity:", inspeq_instance.clarity(input_data))
print("All Metrics:", inspeq_instance.get_all_metrics(input_data))
```
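The eleven per-metric methods all take the same `input_data` dictionary, so repetitive call sites like the ones above can be driven generically with `getattr`. The stub `Evaluator` below stands in for the real `inspeq.client.Evaluator` (which needs a valid API key and makes network calls) purely so this sketch is self-contained and runnable:

```python
# Stub standing in for inspeq.client.Evaluator; the real client requires an
# SDK API key. Any attribute access returns a fake metric function.
class Evaluator:
    def __getattr__(self, name):
        return lambda input_data: {"metric": name, "input_keys": sorted(input_data)}

# The eleven individual metric method names listed in this README.
METHOD_NAMES = [
    "factual_consistency", "answer_relevance", "response_tone",
    "grammatical_correctness", "fluency", "do_not_use_keywords",
    "word_limit_test", "conceptual_similarity", "coherence",
    "readability", "clarity",
]

inspeq_instance = Evaluator()
input_data = {
    "llm_input_query": "your_llm_input_query",
    "llm_input_context": "your_llm_input_context",
    "llm_output": "your_llm_output",
}

# Call every metric method by name and collect the results in one dict.
results = {name: getattr(inspeq_instance, name)(input_data) for name in METHOD_NAMES}
for name, result in results.items():
    print(f"{name}: {result}")
```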
Supported Features

Metrics:

- Factual Consistency: Check if the generated text is consistent with known facts.
- Grammatical Correctness: Assess the grammatical accuracy of the generated text.
- Do Not Use Keywords: Identify and evaluate the use of specific keywords or phrases.
- Fluency: Assess the overall smoothness and fluency of the generated text.
- Answer Relevance: Determine the relevance of the generated text in the context of a given query or context.
- Word Limit Test: Check if the generated text adheres to specified word limits.
- Response Tone: Assess the tone and style of the generated response.
- Conceptual Similarity: Measure how closely the generated text aligns with the intended conceptual content.
- Coherence: Evaluate how well the model generates coherent and logical responses that align with the context of the question.
- Readability: Tell how easy the LLM output is to read and understand.
- Clarity: The response's clarity in terms of language and structure. It is a subjective metric, currently based on grammar, readability, concise sentences and words, and low redundancy. Adding contextual clarity would require topic coherence, response relevance, and word ambiguity.
- Get_all_metrics: A super metric that returns the results of all metrics in one call. Keep in mind that it is heavyweight and takes longer to run; it currently returns all 11 metrics in its response.
Official Documentation
Click Here