Weavel: automated prompt engineering and observability for LLM applications
Installation
pip install weavel
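The examples below assume a Weavel API key is configured. A minimal setup sketch, assuming the SDK reads the key from a WEAVEL_API_KEY environment variable (confirm the exact variable name in the documentation):

import os

# Assumption: the SDK picks up the API key from this environment variable.
os.environ["WEAVEL_API_KEY"] = "your-api-key"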
Documentation
You can find our full documentation here.
How to use
Option 1: Using OpenAI wrapper
from weavel import WeavelOpenAI as OpenAI

# Drop-in replacement for the OpenAI client: calls go through as usual
# and are logged to Weavel automatically.
openai = OpenAI()

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ],
    headers={
        "generation_name": "hello",
    }
)
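Since the wrapper is a drop-in replacement for the OpenAI client, response is an ordinary chat completion and can be read the same way as with the plain SDK:

# The wrapped client returns a standard ChatCompletion object.
print(response.choices[0].message.content)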
Option 2: Logging inputs/outputs of LLM calls
from weavel import Weavel
from openai import OpenAI
from pydantic import BaseModel

openai = OpenAI()
# initialize Weavel
weavel = Weavel()

class Answer(BaseModel):
    reasoning: str
    answer: str

question = "What is x if x + 2 = 4?"

response = openai.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "You are a math teacher."},
        {"role": "user", "content": question}
    ],
    response_format=Answer
).choices[0].message.parsed

# log the generation
weavel.generation(
    name="solve-math",  # optional
    inputs={"question": question},
    outputs=response.model_dump()
)
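For reference, model_dump() serializes the pydantic model to a plain dict, so the logged outputs are JSON-serializable:

print(response.model_dump())
# e.g. {"reasoning": "Subtract 2 from both sides of x + 2 = 4.", "answer": "x = 2"}
# (values illustrative)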
Option 3 (Advanced Usage): OTEL-compatible trace logging
from weavel import Weavel

weavel = Weavel()

# A session groups all activity for a single end user.
session = weavel.session(user_id="UNIQUE_USER_ID")

# Log a conversation message within the session.
session.message(
    role="user",
    content="Nice to meet you!"
)

# Track a custom user event.
session.track(
    name="Main Page Viewed"
)

# Group internal steps under a named trace; each log entry acts as a span.
trace = session.trace(
    name="retrieval_module"
)

trace.log(
    name="google_search"
)
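To record the model's reply in the same conversation, the message call can presumably be reused with the assistant role (a sketch; role="assistant" is an assumption mirroring the user message above):

# Assumption: assistant replies are logged with the same message() call.
session.message(
    role="assistant",
    content="Nice to meet you too! How can I help?"
)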
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution: weavel-1.11.0.tar.gz (28.0 kB)
Built Distribution: weavel-1.11.0-py3-none-any.whl (32.7 kB)