Parea python sdk
Installation
pip install -U parea-ai
or install with Poetry
poetry add parea-ai
Debugging Chains & Agents
You can iterate on your chains & agents much faster by using a local cache. This allows you to change your code & prompts without re-running every previously valid LLM call. Simply add these two lines to the beginning of your code and start a local Redis cache:
from parea import init, RedisCache
init(cache=RedisCache())
The above will use the default Redis cache at localhost:6379 with no password. You can also specify your Redis database:
import os

from parea import init, RedisCache

cache = RedisCache(
    host=os.getenv("REDIS_HOST", "localhost"),  # default value
    port=int(os.getenv("REDIS_PORT", 6379)),  # default value
    password=os.getenv("REDIS_PASSWORD", None),  # default value
)
init(cache=cache)
If you set cache=None in init, no cache will be used.
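Conceptually, the local cache keys each LLM request by its parameters and replays the stored response on a repeat request, so only new or changed calls hit the provider. The sketch below illustrates that idea with stdlib code only; cached_call and fake_llm are hypothetical names for illustration, not part of the SDK:

```python
import hashlib
import json

cache: dict[str, str] = {}

def cached_call(params: dict, llm_fn) -> str:
    # Key the request by a stable hash of its parameters.
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    if key not in cache:  # only call the LLM on a cache miss
        cache[key] = llm_fn(params)
    return cache[key]

calls = []
def fake_llm(params):
    # Stand-in for a real provider call; records how often it is invoked.
    calls.append(params)
    return "response for " + params["prompt"]

p = {"model": "gpt-3.5-turbo", "prompt": "hi"}
print(cached_call(p, fake_llm))  # first call: cache miss, hits fake_llm
print(cached_call(p, fake_llm))  # second call: served from the cache
print(len(calls))  # 1
```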
Benchmark your LLM app across many inputs
You can benchmark your LLM app across many inputs by using the benchmark command. This will run the entry point of your app with the specified inputs and create a report with the results.
parea benchmark --func app:main --csv_path benchmark.csv
The CSV file will be used to fill in the arguments to your function. The report will be a CSV file of all the traces. If you set your Parea API key, the traces will also be logged to the Parea dashboard. Note that this feature requires a running Redis cache. Please raise a GitHub issue if you would like to use this feature without a Redis cache.
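A sketch of how the CSV columns map to your entry point's arguments: each column name matches a parameter, and each row becomes one invocation. The summarize function and the column names below are assumptions for illustration, not SDK requirements:

```python
import csv
import io

# Hypothetical entry point; the benchmark command calls it once per CSV row.
def summarize(x: str, y: str) -> str:
    return f"Write a hello world program using {x} and the {y} framework."

# benchmark.csv: one column per function parameter, one row per test case.
csv_text = "x,y\nGolang,Fiber\nPython,FastAPI\n"

results = []
for row in csv.DictReader(io.StringIO(csv_text)):
    # Each row is unpacked into keyword arguments of the entry point.
    results.append(summarize(**row))

print(results[0])  # Write a hello world program using Golang and the Fiber framework.
```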
Automatically log all your LLM call traces
You can automatically log all your LLM traces to the Parea dashboard by setting the PAREA_API_KEY
environment variable or specifying it in the init
function.
This will help you debug issues your customers are facing by stepping through the LLM call traces and recreating the issue
in your local setup & code.
import os

from parea import init
init(
api_key=os.getenv("PAREA_API_KEY"), # default value
cache=...
)
Use a deployed prompt
import os
from dotenv import load_dotenv
from parea import Parea
from parea.schemas.models import Completion, UseDeployedPrompt, CompletionResponse, UseDeployedPromptResponse
load_dotenv()
p = Parea(api_key=os.getenv("PAREA_API_KEY"))
# You will find this deployment_id in the Parea dashboard
deployment_id = '<DEPLOYMENT_ID>'
# Assuming your deployed prompt's message is:
# {"role": "user", "content": "Write a hello world program using {{x}} and the {{y}} framework."}
inputs = {"x": "Golang", "y": "Fiber"}
# You can easily unpack a dictionary into an attrs class
test_completion = Completion(
**{
"deployment_id": deployment_id,
"llm_inputs": inputs,
"metadata": {"purpose": "testing"}
}
)
# By passing in the inputs, in addition to the raw message with unfilled variables {{x}} and {{y}},
# you will also get the filled-in prompt:
# {"role": "user", "content": "Write a hello world program using Golang and the Fiber framework."}
test_get_prompt = UseDeployedPrompt(deployment_id=deployment_id, llm_inputs=inputs)
def main():
completion_response: CompletionResponse = p.completion(data=test_completion)
print(completion_response)
deployed_prompt: UseDeployedPromptResponse = p.get_prompt(data=test_get_prompt)
print("\n\n")
print(deployed_prompt)
async def main_async():
completion_response: CompletionResponse = await p.acompletion(data=test_completion)
print(completion_response)
deployed_prompt: UseDeployedPromptResponse = await p.aget_prompt(data=test_get_prompt)
print("\n\n")
print(deployed_prompt)
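The {{variable}} substitution that produces the filled-in prompt can be sketched locally with a small stdlib helper (fill_template is a hypothetical illustration, not part of the SDK):

```python
import re

def fill_template(template: str, llm_inputs: dict) -> str:
    # Replace each {{name}} placeholder with the matching input value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(llm_inputs[m.group(1)]), template)

raw = "Write a hello world program using {{x}} and the {{y}} framework."
print(fill_template(raw, {"x": "Golang", "y": "Fiber"}))
# Write a hello world program using Golang and the Fiber framework.
```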
Logging results from LLM providers [Example]
import os
import openai
from dotenv import load_dotenv
from parea import Parea
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
p = Parea(api_key=os.getenv("PAREA_API_KEY"))
x = "Golang"
y = "Fiber"
messages = [{
"role": "user",
"content": f"Write a hello world program using {x} and the {y} framework."
}]
model = "gpt-3.5-turbo"
temperature = 0.0
# define your OpenAI call as you would normally and we'll automatically log the results
def main():
    completion = openai.ChatCompletion.create(model=model, temperature=temperature, messages=messages)
    print(completion.choices[0].message["content"])
Open source community features
- Ready-to-use Pull Request templates and several Issue templates.
- Files such as LICENSE, CONTRIBUTING.md, CODE_OF_CONDUCT.md, and SECURITY.md are generated automatically.
- Semantic Versions specification with Release Drafter.
🛡 License
This project is licensed under the terms of the Apache Software License 2.0. See LICENSE for more details.
📃 Citation
@misc{parea-sdk,
author = {joel-parea-ai},
title = {Parea python sdk},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/parea-ai/parea-sdk}}
}
Hashes for parea_ai-0.2.12-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 40558355625642dd71c6e2a5965430df4f6884ca20b156993f0a697cf19421b2
MD5 | 7afd133f9775b0f305b586229e8e278a
BLAKE2b-256 | 17a0df20e9b4d85354cfb6a8e98e808eb4f57cad7e1810ae078696cfda271d3c