# RagaAI Catalyst
RagaAI Catalyst is a powerful tool for managing and optimizing LLM projects. It provides functionality for project management, trace recording, and experiment management, allowing you to fine-tune and evaluate your LLM applications effectively.
## Installation
To install RagaAI Catalyst, use pip:

```bash
pip install ragaai-catalyst
```
## Configuration
Before using RagaAI Catalyst, you need to set up your credentials. You can do this by setting environment variables or by passing them directly to the `RagaAICatalyst` class:
```python
from ragaai_catalyst import RagaAICatalyst

catalyst = RagaAICatalyst(
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
    base_url="BASE_URL"
)
```
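Alternatively, you can supply the credentials through environment variables before constructing the client. This is a minimal sketch; the variable names used here (`RAGAAI_CATALYST_ACCESS_KEY`, `RAGAAI_CATALYST_SECRET_KEY`, `RAGAAI_CATALYST_BASE_URL`) are assumptions, so verify them against the SDK documentation.

```python
import os
from ragaai_catalyst import RagaAICatalyst

# Assumed environment variable names -- confirm against the SDK docs.
os.environ["RAGAAI_CATALYST_ACCESS_KEY"] = "YOUR_ACCESS_KEY"
os.environ["RAGAAI_CATALYST_SECRET_KEY"] = "YOUR_SECRET_KEY"
os.environ["RAGAAI_CATALYST_BASE_URL"] = "BASE_URL"

# With the credentials in the environment, the client can be created
# without passing them explicitly.
catalyst = RagaAICatalyst()
```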
**Note:** Authentication with RagaAI Catalyst is required before performing any of the operations below.
## Usage
### Project Management
Create and manage projects using RagaAI Catalyst:
```python
# Create a project
project = catalyst.create_project(
    project_name="Test-RAG-App-1",
    usecase="Chatbot"
)

# Get available project use cases
catalyst.project_use_cases()

# List projects
projects = catalyst.list_projects()
print(projects)
```
### Dataset Management
Manage datasets efficiently for your projects:
```python
from ragaai_catalyst import Dataset

# Initialize dataset management for a specific project
dataset_manager = Dataset(project_name="project_name")

# List existing datasets
datasets = dataset_manager.list_datasets()
print("Existing Datasets:", datasets)

# Create a dataset from a CSV file
dataset_manager.create_from_csv(
    csv_path='path/to/your.csv',
    dataset_name='MyDataset',
    schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'}
)

# Get the project schema mapping
dataset_manager.get_schema_mapping()
```
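To make `schema_mapping` concrete, the sketch below builds a small CSV with pandas and maps its columns to Catalyst schema elements. The column names and the schema elements they map to (`prompt`, `response`, `context`) are illustrative assumptions; use `dataset_manager.get_schema_mapping()` to see the elements your project actually expects.

```python
import pandas as pd
from ragaai_catalyst import Dataset

# Hypothetical columns for a small RAG evaluation set.
df = pd.DataFrame({
    "query": ["What is RAG?"],
    "answer": ["Retrieval-augmented generation pairs a retriever with an LLM."],
    "context": ["RAG systems retrieve documents before generating an answer."],
})
df.to_csv("rag_eval_sample.csv", index=False)

dataset_manager = Dataset(project_name="Test-RAG-App-1")
dataset_manager.create_from_csv(
    csv_path="rag_eval_sample.csv",
    dataset_name="RagEvalSample",
    # Assumed schema elements -- confirm with get_schema_mapping().
    schema_mapping={
        "query": "prompt",
        "answer": "response",
        "context": "context",
    },
)
```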
For more detailed information on Dataset Management, including CSV schema handling and advanced usage, please refer to the Dataset Management documentation.
### Evaluation
Create and manage metric evaluations of your RAG application:
```python
from ragaai_catalyst import Evaluation

# Create an experiment
evaluation = Evaluation(
    project_name="Test-RAG-App-1",
    dataset_name="MyDataset",
)

# Get the list of available metrics
evaluation.list_metrics()

# Add metrics to the experiment
schema_mapping = {
    'Query': 'prompt',
    'response': 'response',
    'Context': 'context',
    'expectedResponse': 'expected_response'
}

# Add a single metric
evaluation.add_metrics(
    metrics=[
        {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.232323}}, "column_name": "Faithfulness_v1", "schema_mapping": schema_mapping},
    ]
)

# Add multiple metrics
evaluation.add_metrics(
    metrics=[
        {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.323}}, "column_name": "Faithfulness_gte", "schema_mapping": schema_mapping},
        {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"lte": 0.323}}, "column_name": "Hallucination_lte", "schema_mapping": schema_mapping},
        {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"eq": 0.323}}, "column_name": "Hallucination_eq", "schema_mapping": schema_mapping},
    ]
)

# Get the status of the experiment
status = evaluation.get_status()
print("Experiment Status:", status)

# Get the results of the experiment
results = evaluation.get_results()
print("Experiment Results:", results)
```
### Trace Management
Record and analyze traces of your RAG application:
```python
from ragaai_catalyst import Tracer

# Start a trace recording
tracer = Tracer(
    project_name="Test-RAG-App-1",
    dataset_name="tracer_dataset_name",
    metadata={"key1": "value1", "key2": "value2"},
    tracer_type="langchain",
    pipeline={
        "llm_model": "gpt-3.5-turbo",
        "vector_store": "faiss",
        "embed_model": "text-embedding-ada-002",
    }
).start()

# Your code here

# Stop the trace recording
tracer.stop()
```
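As a rough illustration of what "your code" can look like between `start()` and `stop()`, the sketch below runs a single LangChain chat call while the tracer is active. With `tracer_type="langchain"` the tracer is expected to pick up LangChain calls automatically; the choice of `langchain_openai.ChatOpenAI` here is purely illustrative.

```python
from ragaai_catalyst import Tracer
from langchain_openai import ChatOpenAI  # illustrative LLM wrapper

tracer = Tracer(
    project_name="Test-RAG-App-1",
    dataset_name="tracer_dataset_name",
    tracer_type="langchain",
).start()

# LangChain calls made while the tracer is running should be recorded
# as traces in the dataset configured above.
llm = ChatOpenAI(model="gpt-3.5-turbo")
answer = llm.invoke("Summarize retrieval-augmented generation in one sentence.")
print(answer.content)

tracer.stop()
```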
### Prompt Management
Manage and use prompts efficiently in your projects:
```python
from ragaai_catalyst import PromptManager

# Initialize the PromptManager
prompt_manager = PromptManager(project_name="Test-RAG-App-1")

# List available prompts
prompts = prompt_manager.list_prompts()
print("Available prompts:", prompts)

# Get the default prompt by prompt_name
prompt_name = "your_prompt_name"
prompt = prompt_manager.get_prompt(prompt_name)

# Get a specific version of a prompt by prompt_name and version
prompt_name = "your_prompt_name"
version = "v1"
prompt = prompt_manager.get_prompt(prompt_name, version)

# Get the variables in a prompt
variables = prompt.get_variables()
print("Variables:", variables)

# Get the prompt content
prompt_content = prompt.get_prompt_content()
print("Prompt content:", prompt_content)

# Compile a prompt with variables
compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today")
print("Compiled prompt:", compiled_prompt)
```
Use the compiled prompt with OpenAI:

```python
import openai

def get_openai_response(prompt):
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=prompt
    )
    return response.choices[0].message.content

openai_response = get_openai_response(compiled_prompt)
print("openai_response:", openai_response)
```
Use the compiled prompt with LiteLLM:

```python
import litellm

def get_litellm_response(prompt):
    response = litellm.completion(
        model="gpt-4o-mini",
        messages=prompt
    )
    return response.choices[0].message.content

litellm_response = get_litellm_response(compiled_prompt)
print("litellm_response:", litellm_response)
```
For more detailed information on Prompt Management, please refer to the Prompt Management documentation.
### Synthetic Data Generation
Generate synthetic question-and-answer pairs from your documents:
```python
from ragaai_catalyst import SyntheticDataGeneration

# Initialize synthetic data generation
sdg = SyntheticDataGeneration()

# Process your file
text = sdg.process_document(input_data="file_path")

# Generate Q&A pairs
result = sdg.generate_qna(text, question_type='simple', model_config={"provider": "openai", "model": "gpt-4o-mini"}, n=20)

# Get supported Q&A types
sdg.get_supported_qna()

# Get supported providers
sdg.get_supported_providers()
```
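The generated Q&A pairs can feed straight back into a Catalyst dataset for evaluation. The sketch below assumes `result` is a pandas DataFrame and that its columns include `Question` and `Answer`; inspect `result.columns` and your project's schema mapping before uploading.

```python
from ragaai_catalyst import Dataset

# Assumes `result` is a pandas DataFrame produced by generate_qna().
result.to_csv("synthetic_qna.csv", index=False)

dataset_manager = Dataset(project_name="Test-RAG-App-1")
dataset_manager.create_from_csv(
    csv_path="synthetic_qna.csv",
    dataset_name="SyntheticQnA",
    # Assumed column names and schema elements -- verify before uploading.
    schema_mapping={
        "Question": "prompt",
        "Answer": "expected_response",
    },
)
```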