A simple yet powerful abstraction for litellm and pydantic
promptic
90% of what you need for LLM app development. Nothing you don't.
promptic is a lightweight, decorator-based Python library that simplifies interacting with large language models (LLMs) using litellm. With promptic, you can effortlessly create prompts, handle input arguments, receive structured outputs from LLMs, and build agents with just a few lines of code.
Installation
pip install promptic
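promptic routes requests through litellm, which reads provider credentials from environment variables. A minimal sketch (OPENAI_API_KEY shown here is just the common OpenAI case; set whichever variable your provider expects):

import os

# litellm picks up provider credentials from the environment;
# OPENAI_API_KEY is the variable for OpenAI-hosted models.
os.environ["OPENAI_API_KEY"] = "sk-..."

from promptic import llm  # confirms the package imported correctly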
Usage
Basics
Functions decorated with @llm will automatically interpolate arguments into the prompt.
from promptic import llm

@llm
def president(year):
    """Who was the President of the United States in {year}?"""

print(president(2000))
# The President of the United States in 2000 was Bill Clinton until January 20th, when George W. Bush was inaugurated as the 43rd President.
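Multiple arguments interpolate the same way. A small sketch (the translate function below is purely illustrative, not part of promptic):

from promptic import llm

@llm
def translate(text, language):
    """Translate '{text}' into {language}."""

print(translate("good morning", "French"))
# e.g. "Bonjour"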
Structured Outputs
You can use Pydantic models to ensure the LLM returns data in exactly the structure you expect. Simply define a Pydantic model and use it as the return type annotation on your decorated function. The LLM's response will be automatically validated against your model schema and returned as a proper Pydantic object.
from pydantic import BaseModel
from promptic import llm

class Forecast(BaseModel):
    location: str
    temperature: float
    units: str

@llm(model="gpt-4o", system="You generate test data for weather forecasts.")
def get_weather(location, units: str = "fahrenheit") -> Forecast:
    """What's the weather for {location} in {units}?"""

print(get_weather("San Francisco", units="celsius"))
# location='San Francisco' temperature=16.0 units='Celsius'
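Because the return value is a real Pydantic instance, the usual Pydantic API applies. A quick illustration, reusing the Forecast model above:

forecast = get_weather("Tokyo")
print(forecast.temperature, forecast.units)  # validated attribute access
print(forecast.model_dump())                 # Pydantic v2 serialization (use .dict() on v1)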
Agents
Functions decorated with @llm.tool become tools that the LLM can invoke to perform actions or retrieve information. The LLM will automatically parse its own reasoning into function arguments and execute the appropriate tool calls, creating a seamless agent interaction.
from datetime import datetime
from promptic import llm

@llm(
    system="You are a helpful assistant that manages schedules and reminders",
    model="gpt-4o-mini"
)
def scheduler(command):
    """{command}"""

@scheduler.tool
def get_current_time():
    """Get the current time"""
    print("getting current time")
    return datetime.now().strftime("%I:%M %p")

@scheduler.tool
def add_reminder(task: str, time: str):
    """Add a reminder for a specific task and time"""
    print(f"adding reminder: {task} at {time}")
    return f"Reminder set: {task} at {time}"

@scheduler.tool
def check_calendar(date: str):
    """Check calendar for a specific date"""
    print(f"checking calendar for {date}")
    return f"Calendar checked for {date}: No conflicts found"

cmd = """
What time is it?
Also, can you check my calendar for tomorrow
and set a reminder for a team meeting at 2pm?
"""

print(scheduler(cmd))
# getting current time
# checking calendar for 2023-10-05
# adding reminder: Team meeting at 2023-10-05T14:00:00
# The current time is 3:48 PM. I checked your calendar for tomorrow, and there are no conflicts. I've also set a reminder for your team meeting at 2 PM tomorrow.
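The tool functions themselves remain ordinary Python callables, so you can exercise them directly in tests without going through the model. A sketch, assuming @scheduler.tool leaves the underlying function callable as usual:

# Direct calls bypass the LLM entirely; only scheduler(...) triggers tool selection.
assert "No conflicts" in check_calendar("2024-01-01")
assert add_reminder("standup", "9am") == "Reminder set: standup at 9am"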
Streaming
The streaming feature allows real-time response generation, useful for long-form content or interactive applications:
from promptic import llm

@llm(stream=True)
def write_poem(topic):
    """Write a haiku about {topic}."""

print("".join(write_poem("artificial intelligence")))
# Binary thoughts hum,
# Electron minds awake, learn,
# Future thinking now.
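Since the decorated function yields chunks as they arrive, you can also render them incrementally instead of joining at the end:

for chunk in write_poem("artificial intelligence"):
    print(chunk, end="", flush=True)  # print each chunk as soon as it arrives
print()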
Error Handling and Dry Runs
Dry runs allow you to see which tools will be called and their arguments without invoking the decorated tool functions. You can also enable debug mode for more detailed logging.
from promptic import llm

@llm(
    system="you are a posh smart home assistant named Jarvis",
    dry_run=True,
    debug=True,
)
def jarvis(command):
    """{command}"""

@jarvis.tool
def turn_light_on():
    """turn light on"""
    return True

@jarvis.tool
def get_current_weather(location: str, unit: str = "fahrenheit"):
    """Get the current weather in a given location"""
    return f"The weather in {location} is 45 degrees {unit}"

print(jarvis("Please turn the light on and check the weather in San Francisco"))
# ...
# [DRY RUN]: function_name = 'turn_light_on' function_args = {}
# [DRY RUN]: function_name = 'get_current_weather' function_args = {'location': 'San Francisco'}
# ...
Resilient LLM Calls with Tenacity
promptic pairs well with tenacity for handling temporary API failures, rate limits, validation errors, and so on. For example, here's how you can implement a cost-effective retry strategy that starts with a smaller model:
from tenacity import retry, stop_after_attempt, retry_if_exception_type
from pydantic import BaseModel, ValidationError
from promptic import llm

class MovieReview(BaseModel):
    title: str
    rating: float
    summary: str
    recommended: bool

@retry(
    # Retry only on Pydantic validation errors
    retry=retry_if_exception_type(ValidationError),
    # Try up to 3 times
    stop=stop_after_attempt(3),
)
@llm(model="gpt-3.5-turbo")  # Start with a faster, cheaper model
def analyze_movie(text) -> MovieReview:
    """Analyze this movie review and extract the key information: {text}"""

try:
    # First attempt with the smaller model
    result = analyze_movie("The new Dune movie was spectacular...")
except ValidationError:
    # If validation still fails after retries with the smaller model,
    # try one final time with a more capable model
    analyze_movie.retry.stop = stop_after_attempt(1)  # Only try once with GPT-4o
    analyze_movie.model = "gpt-4o"
    result = analyze_movie("The new Dune movie was spectacular...")

print(result)
# title='Dune' rating=9.5 summary='A spectacular sci-fi epic...' recommended=True
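The same pattern works for transient provider errors. A sketch assuming rate limits surface as litellm.exceptions.RateLimitError (check which exception types your litellm version raises):

import litellm
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
from promptic import llm

@retry(
    # Back off exponentially (2s, 4s, 8s, ...) when the provider rate-limits us
    retry=retry_if_exception_type(litellm.exceptions.RateLimitError),
    wait=wait_exponential(multiplier=2, min=2, max=30),
    stop=stop_after_attempt(5),
)
@llm(model="gpt-4o-mini")
def summarize(text):
    """Summarize in one sentence: {text}"""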
License
promptic is open-source software licensed under the Apache License 2.0.