A simple yet powerful abstraction for litellm and pydantic

Project description

promptic

90% of what you need for LLM app development. Nothing you don't.

promptic is a lightweight, decorator-based Python library that simplifies interacting with large language models (LLMs) via litellm. With promptic, you can create prompts, interpolate input arguments, receive structured outputs from LLMs, and build tool-using agents in just a few lines of code.

Installation

pip install promptic

Usage

Basics

Functions decorated with @llm use their docstring as the prompt and automatically interpolate their arguments into it.

from promptic import llm

@llm
def president(year):
    """Who was the President of the United States in {year}?"""

print(president(2000))
# The President of the United States in 2000 was Bill Clinton until January 20th, when George W. Bush was inaugurated as the 43rd President.
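
Interpolation follows the {placeholder} names in the docstring, so functions with several arguments work the same way. A minimal sketch (the function and prompt below are illustrative, not taken from the project's docs):

from promptic import llm

@llm
def itinerary(city, days):
    """Suggest a {days}-day itinerary for a visit to {city}."""

print(itinerary("Kyoto", 3))
# prints a model-generated itinerary for a 3-day visit to Kyoto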

Structured Outputs

You can use Pydantic models to ensure the LLM returns data in exactly the structure you expect. Simply define a Pydantic model and use it as the return type annotation on your decorated function. The LLM's response will be automatically validated against your model schema and returned as a proper Pydantic object.

from pydantic import BaseModel
from promptic import llm

class Forecast(BaseModel):
    location: str
    temperature: float
    units: str

@llm(model="gpt-4o", system="You generate test data for weather forecasts.")
def get_weather(location, units: str = "fahrenheit") -> Forecast:
    """What's the weather for {location} in {units}?"""

print(get_weather("San Francisco", units="celsius"))
# location='San Francisco' temperature=16.0 units='Celsius'
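
The returned value is an ordinary Pydantic instance, so fields such as temperature can be accessed directly. Because validation is delegated to Pydantic, nested models and lists should also work, provided the model can produce the schema; a hedged sketch (the Recipe schema is illustrative):

from typing import List

from pydantic import BaseModel
from promptic import llm

class Ingredient(BaseModel):
    name: str
    quantity: str

class Recipe(BaseModel):
    title: str
    ingredients: List[Ingredient]

@llm(model="gpt-4o")
def invent_recipe(dish) -> Recipe:
    """Write a short recipe for {dish}."""

recipe = invent_recipe("miso soup")
print(recipe.title)
print([ingredient.name for ingredient in recipe.ingredients])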

Agents

Register tools by decorating functions with the .tool attribute of an @llm-decorated function (e.g. @scheduler.tool below). The LLM can then call these tools to perform actions or retrieve information, and the agent automatically invokes each decorated function with the arguments extracted from the LLM's response.

from datetime import datetime

from promptic import llm

@llm(
    system="You are a helpful assistant that manages schedules and reminders",
    model="gpt-4o-mini"
)
def scheduler(command):
    """{command}"""

@scheduler.tool
def get_current_time():
    """Get the current time"""
    print("getting current time")
    return datetime.now().strftime("%I:%M %p")

@scheduler.tool
def add_reminder(task: str, time: str):
    """Add a reminder for a specific task and time"""
    print(f"adding reminder: {task} at {time}")
    return f"Reminder set: {task} at {time}"

@scheduler.tool
def check_calendar(date: str):
    """Check calendar for a specific date"""
    print(f"checking calendar for {date}")
    return f"Calendar checked for {date}: No conflicts found"

cmd = """
What time is it? 
Also, can you check my calendar for tomorrow 
and set a reminder for a team meeting at 2pm?
"""

print(scheduler(cmd))
# getting current time
# checking calendar for 2023-10-05
# adding reminder: Team meeting at 2023-10-05T14:00:00
# The current time is 3:48 PM. I checked your calendar for tomorrow, and there are no conflicts. I've also set a reminder for your team meeting at 2 PM tomorrow.

Streaming

The streaming feature allows real-time response generation, useful for long-form content or interactive applications:

from promptic import llm

@llm(stream=True)
def write_poem(topic):
    """Write a haiku about {topic}."""

print("".join(write_poem("artificial intelligence")))
# Binary thoughts hum,
# Electron minds awake, learn,
# Future thinking now.
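
Because stream=True makes the decorated function return an iterator of text chunks (which is what the "".join above consumes), you can also print the output incrementally as it arrives:

# Print each chunk as soon as it arrives instead of joining at the end
for chunk in write_poem("the ocean"):
    print(chunk, end="", flush=True)
print()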

Error Handling and Dry Runs

Dry runs allow you to see which tools will be called and their arguments without invoking the decorated tool functions. You can also enable debug mode for more detailed logging.

from promptic import llm

@llm(
    system="you are a posh smart home assistant named Jarvis",
    dry_run=True,
    debug=True,
)
def jarvis(command):
    """{command}"""

@jarvis.tool
def turn_light_on():
    """turn light on"""
    return True

@jarvis.tool
def get_current_weather(location: str, unit: str = "fahrenheit"):
    """Get the current weather in a given location"""
    return f"The weather in {location} is 45 degrees {unit}"

print(jarvis("Please turn the light on and check the weather in San Francisco"))
# ...
# [DRY RUN]: function_name = 'turn_light_on' function_args = {}
# [DRY RUN]: function_name = 'get_current_weather' function_args = {'location': 'San Francisco'}
# ...

Resilient LLM Calls with Tenacity

promptic pairs perfectly with tenacity for handling temporary API failures, rate limits, validation errors, and so on. For example, here's how you can implement a cost-effective retry strategy that starts with smaller models:

from tenacity import retry, stop_after_attempt, retry_if_exception_type
from pydantic import BaseModel, ValidationError
from promptic import llm

class MovieReview(BaseModel):
    title: str
    rating: float
    summary: str
    recommended: bool

@retry(
    # Retry only on Pydantic validation errors
    retry=retry_if_exception_type(ValidationError),
    # Try up to 3 times
    stop=stop_after_attempt(3),
)
@llm(model="gpt-3.5-turbo")  # Start with a faster, cheaper model
def analyze_movie(text) -> MovieReview:
    """Analyze this movie review and extract the key information: {text}"""

try:
    # First attempt with smaller model
    result = analyze_movie("The new Dune movie was spectacular...")
except ValidationError as e:
    # If validation fails after retries with smaller model, 
    # try one final time with a more capable model
    analyze_movie.retry.stop = stop_after_attempt(1)  # Only try once with GPT-4o
    analyze_movie.model = "gpt-4o"
    result = analyze_movie("The new Dune movie was spectacular...")

print(result)
# title='Dune' rating=9.5 summary='A spectacular sci-fi epic...' recommended=True
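
The same pattern covers transient provider errors such as rate limits. Assuming litellm's OpenAI-style exception exports (e.g. RateLimitError; verify against your litellm version), a hedged sketch with exponential backoff:

from litellm import RateLimitError  # assumed export; litellm maps provider errors to OpenAI-style exceptions
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

from promptic import llm

@retry(
    # Retry only when the provider reports a rate limit
    retry=retry_if_exception_type(RateLimitError),
    # Back off exponentially between attempts, capped at 30 seconds
    wait=wait_exponential(multiplier=1, min=1, max=30),
    stop=stop_after_attempt(5),
)
@llm(model="gpt-4o-mini")
def summarize(text):
    """Summarize this in one sentence: {text}"""

print(summarize("promptic wraps litellm with a decorator-based API for prompts, structured outputs, and tools."))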

License

promptic is open-source software licensed under the Apache License 2.0.

Download files

Download the file for your platform.

Source Distribution

promptic-1.1.11.tar.gz (78.0 kB)

Built Distribution

promptic-1.1.11-py2.py3-none-any.whl (10.2 kB)

File details

Details for the file promptic-1.1.11.tar.gz.

File metadata

  • Download URL: promptic-1.1.11.tar.gz
  • Size: 78.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for promptic-1.1.11.tar.gz:

  • SHA256: 7d2d72554935f9d8feb34bd78fc9cbf3e685d52a2ee6b4a78c26daa46fa97384
  • MD5: 759a1ef79544f5d3ca12885bcd00a26c
  • BLAKE2b-256: 1442404ee3f9234b6a2d621f65197f11529dcbd3a50b1bd94e911fc93866dffd

Provenance

The following attestation bundles were made for promptic-1.1.11.tar.gz:

Publisher: publish-to-pypi.yml on knowsuchagency/promptic

File details

Details for the file promptic-1.1.11-py2.py3-none-any.whl.

File metadata

  • Download URL: promptic-1.1.11-py2.py3-none-any.whl
  • Size: 10.2 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for promptic-1.1.11-py2.py3-none-any.whl:

  • SHA256: 9cde2c759208c3f413092164a6c83e6f19d1d6892344bf8fe49469c2874fd86f
  • MD5: 6cfbe0fe16f3dc6aec9040b7b3dbf8e9
  • BLAKE2b-256: 60b99811b8104dd1a35b346649925fa18649a91ad8a42d4e3460a27b8ac5d905

Provenance

The following attestation bundles were made for promptic-1.1.11-py2.py3-none-any.whl:

Publisher: publish-to-pypi.yml on knowsuchagency/promptic
