LangChain Decorators ✨
LangChain Decorators is a lightweight layer built on top of LangChain that provides syntactic sugar 🍭 for writing custom prompts and chains.
Note: This is an unofficial add-on to the LangChain library. It is not trying to compete with LangChain, just to make using it easier. Many ideas here are opinionated.
Here is a simple example using LangChain Decorators ✨:
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers") -> str:
    """
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    """
    return
# Run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
Main principles and benefits:
- A more Pythonic way to write prompts
- Write multi-line prompts without breaking your code’s indentation
- Leverage IDE support for hints, type checking, and doc popups to quickly see prompts and parameters
- Keep the power of the 🦜🔗 LangChain ecosystem
- Add support for optional parameters
- Easily share parameters between prompts by binding them to a class (see Prompts as object methods below)
Quick start
Installation
pip install langchain_decorators
Examples
A good way to start is to review the examples in the project repository.
Prompt declarations
Define a prompt by creating a function with arguments as inputs and the function docstring as the prompt template:
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    """
This default declaration is translated into a chat with a single user message.
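In other words, the declaration above is equivalent to writing the prompt as one explicit user message block (a sketch using the `<prompt:...>` block syntax introduced just below; both forms should render the same chat):

@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    ```<prompt:user>
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    ```
    """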
If you want to define a prompt with multiple messages (common for chat models), add special code blocks inside the function docstring:
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    ```<prompt:system>
    You are a social media manager.
    ```

    ```<prompt:user>
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    ```

    ```<prompt:assistant>
    I need to think about it.
    ```
    """
The pattern is a series of consecutive code blocks with a "language" tag in the format `<prompt:[message-role]>`.
Documenting your prompt
You can specify which part of your docstring is the prompt by using a code block with the <prompt> tag:
@llm_prompt
def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for developers.
    It needs to be a code block marked with the `<prompt>` tag.

    ```<prompt:user>
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    ```

    Only the code block above will be used as a prompt; the rest of the docstring is documentation for developers.
    It also has a nice benefit in IDEs (like VS Code), which will display the prompt properly (without trying to parse it as Markdown).
    """
    return
Chat messages in prompt
For chat models, it’s useful to define the prompt as a set of message templates. Here’s how:
@llm_prompt
def simulate_conversation(human_input: str, agent_role: str = "a pirate"):
    """
    ## System message
    - Note the `:system` suffix inside the `<prompt:_role_>` tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    Always reply in code, using Python or JavaScript code blocks only.
    ```

    For example:

    ```<prompt:user>
    Hello, who are you?
    ```

    A reply:

    ```<prompt:assistant>
    \```python
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    We can also add some history using a placeholder:

    ```<prompt:placeholder>
    {history}
    ```

    ```<prompt:user>
    {human_input}
    ```

    Only the code blocks above will be used as a prompt; the rest of the docstring is documentation for developers.
    """
    pass
The roles here are the model’s native roles (assistant, user, system for ChatGPT-compatible models).
Optional sections
- Define a section of your prompt that should be optional.
- If any referenced input in the section is missing or empty (None or ""), the whole section will not be rendered.
Syntax:
@llm_prompt
def prompt_with_optional_partials():
    """
    This text will always be rendered, but

    {? anything inside this block will be rendered only if all referenced {value}s
    are not empty (None | "") ?}

    You can also place it inline:

    this too will be rendered{?, but
    this block will be rendered only if {this_value} and {that_value}
    are not empty ?}!
    """
Output parsers
- The llm_prompt decorator tries to detect the best output parser based on the return type (if not set, it returns the raw string).
- list, dict, and Pydantic model outputs are supported natively.
# this example should run as-is
from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business: str, count: int) -> list:
    """Write {count} good name suggestions for a company that {company_business}."""
    pass

write_name_suggestions(company_business="sells cookies", count=5)
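A dict return type works the same way; here is a minimal sketch, assuming (as with Pydantic models in the Structured output section below) that the {FORMAT_INSTRUCTIONS} placeholder injects the parser's formatting instructions (`extract_contact` is a hypothetical prompt):

@llm_prompt
def extract_contact(text: str) -> dict:
    """
    Extract the person's name and email from this text: {text}
    {FORMAT_INSTRUCTIONS}
    """

extract_contact(text="Reach out to Jane Doe at jane@example.com")
# -> e.g. {"name": "Jane Doe", "email": "jane@example.com"} (shape depends on the model's reply)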
Chat sessions / threads
For agent-style workflows, you often need to keep track of messages in a single session/thread. Wrap calls in LlmChatSession:
from langchain_decorators import llm_prompt, LlmChatSession

@llm_prompt
def my_prompt(user_input):
    """
    ```<prompt:system>
    Be a pirate assistant that can reply with 5 words max.
    ```

    ```<prompt:placeholder>
    {messages}
    ```

    ```<prompt:user>
    {user_input}
    ```
    """

with LlmChatSession():
    while True:
        response = my_prompt(user_input=input("Enter your message: "))
        print(response)
Tool calling
Implementing tool calling with LangChain can be a hassle: you need to manage chat history, collect tool response messages, and add them back to history.
Decorators offer a simplified variant that manages this for you.
You can use either the native LangChain @tool decorator or @llm_function, which adds conveniences such as handling bound methods and allowing dynamic tool schema generation (especially useful for passing dynamic argument domain values).
import datetime
from pydantic import BaseModel
from langchain_decorators import llm_prompt, llm_function, LlmChatSession

class Agent(BaseModel):
    customer_name: str  # bound fields/properties on the instance are accessible in the prompt

    @property
    def current_time(self):
        return datetime.datetime.now().isoformat()

    @llm_function
    def express_emotion(self, emoji: str) -> str:
        """Use this tool to express your emotion as an emoji."""
        print(emoji)
        return emoji

    @llm_prompt
    def main_prompt(self, user_input: str):
        """
        ```<prompt:system>
        You are a friendly but shy assistant. Try to reply with the fewest words possible.
        Context:
        customer name is {customer_name}
        current time is {current_time}
        ```

        ```<prompt:placeholder>
        {messages}
        ```

        ```<prompt:user>
        {user_input}
        ```
        """

    def start(self):
        # you can also pass native LangChain tools (created with the @tool decorator) here
        with LlmChatSession(tools=[self.express_emotion]):
            while True:
                # the session collects tool calls and tool-response messages
                # and adds them back to the chat history for you
                print(self.main_prompt(user_input=input("Enter your message: ")))
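Running the agent is then just a matter of instantiating it (hypothetical values):

agent = Agent(customer_name="Alice")
agent.start()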
Enum arguments
The simplest way to define an enum is via type annotation using Literal:
from typing import Literal

@llm_function
def do_magic(spell: str, strength: Literal["light", "medium", "strong"]):
    """
    Do some kind of magic.

    Args:
        spell (str): spell text
        strength (str): the strength of the spell
    """
For dynamic domain values, use:
@llm_function(dynamic_schema=True)
def do_magic(spell: str, strength: Literal["light", "medium", "strong"]):
    """
    Do some kind of magic.

    Args:
        spell (Literal{spells_unlocked}): spell text
        strength (Literal["light","medium","strong"]): the strength of the spell
    """

# the {spells_unlocked} placeholder in the tool schema is resolved from the session context:
spells_unlocked = ["levitation", "fireball"]
with LlmChatSession(tools=[do_magic], context={"spells_unlocked": spells_unlocked}):
    my_prompt(user_input="Make it levitate")
Info: this works by parsing any list of values (`["val1", "val2"]`) in the argument description. You can also use `|` as a separator, with or without quotes. The `Literal` prefix is optional and used only for clarity.
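For instance, a pipe-separated list of quoted values in the argument description should be picked up as an enum as well (a minimal sketch; `set_mood` is a hypothetical tool, and this assumes the docstring parsing described above applies):

@llm_function
def set_mood(mood: str):
    """
    Set the assistant's mood.

    Args:
        mood: "happy" | "sad" | "grumpy"
    """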
Simplified streaming
To use streaming:
- Define the prompt as an async function.
- Turn on streaming in the decorator (or via a PromptType).
- Capture the stream using StreamingContext.
This lets you mark which prompts should be streamed without wiring LLMs and callbacks throughout your code. Streaming happens only if the call is executed inside a StreamingContext.
# this example should run as-is
import asyncio

from langchain_decorators import StreamingContext, llm_prompt

# Mark the prompt for streaming (only async functions support streaming)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write a short header for my post about {topic} for the {platform} platform.
    It should be for a {audience} audience.
    (Max 15 words)
    """
    pass

# Simple callback to demonstrate streaming; replace with websockets in a real app
tokens = []
def capture_stream_func(new_token: str):
    tokens.append(new_token)

# Capture the stream from prompts marked with capture_stream
async def run():
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="cookies")

    print("Stream finished ... tokens are colorized alternately")
    print("\nWe've captured", len(tokens), "tokens 🎉\n")
    print("Here is the result:")
    print(result)

asyncio.run(run())
Structured output
To get structured output, annotate your prompt with a return type:
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field

class TheOutputStructureWeExpect(BaseModel):
    name: str = Field(description="The name of the company")
    headline: str = Field(description="The description of the company (for the landing page)")
    employees: list[str] = Field(description="5–8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business: str) -> TheOutputStructureWeExpect:
    """
    Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name:", company.name)
print("Company headline:", company.headline)
print("Company employees:", company.employees)
Prompts as object methods
Using prompts bound to a class as methods brings several advantages:
- cleaner code organization
- access to object fields/properties as prompt arguments
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name: str
    assistant_role: str
    field: str

    @property
    def a_property(self):
        return "whatever"

    @llm_prompt
    def hello_world(self, function_kwarg: str | None = None):
        """
        We can reference any {field} or {a_property} inside our prompt and combine it with {function_kwarg}.
        """

    @llm_prompt
    def introduce_your_self(self) -> str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}.
        ```

        ```<prompt:user>
        Introduce yourself (in fewer than 20 words).
        ```
        """

personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate", field="N/A")
print(personality.introduce_your_self())
Custom prompt input formatting
We often need to format or preprocess some prompt inputs. While you can prepare this before calling your prompt, it can be tedious to repeat the same preprocessing everywhere. There are two main ways to preprocess inputs more elegantly:
a) Using nested functions
def my_func(input1: list, other_input: str):
    @llm_prompt
    def my_func_prompt(input1_str: str, other_input: str):
        """
        Do something with {input1_str} and {other_input}
        """

    return my_func_prompt(input1_str=",".join(input1), other_input=other_input)
b) Preprocessing inputs directly in the function
import datetime

@llm_prompt
def my_func_prompt(input1: list, other_input: str):
    """
    Current time: {current_time}
    Do something with {input1} and {other_input}
    """
    # We can override the value of any kwarg by returning a dict with overrides.
    # We can also enrich the args with prompt inputs from a broader context
    # that are not passed as function arguments.
    return {
        "input1": ",".join(input1),
        "current_time": datetime.datetime.now().isoformat(),
    }
The latter approach is useful when prompts are bound to classes, allowing you to preprocess inputs while preserving access to self fields and variables.
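For instance, a class-bound prompt can combine instance fields with preprocessed inputs (a minimal sketch following the same dict-override convention as above; `Newsletter` and its fields are hypothetical):

from pydantic import BaseModel
from langchain_decorators import llm_prompt

class Newsletter(BaseModel):
    audience: str

    @llm_prompt
    def summarize(self, articles: list):
        """
        Summarize these articles for {audience}:
        {articles_bullets}
        """
        # enrich the prompt inputs with a preprocessed value;
        # {audience} is resolved from the bound instance
        return {"articles_bullets": "\n".join(f"- {a}" for a in articles)}

Newsletter(audience="developers").summarize(articles=["LangChain 0.1 released", "Prompt caching explained"])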
Defining custom settings
Mark a function as a prompt with the llm_prompt decorator, effectively turning it into an LLMChain.
A standard LLMChain takes more init parameters than just input variables and a prompt; the decorator hides those details. You can control it in several ways:
- Global settings:
from langchain_decorators import GlobalSettings
from langchain.chat_models import ChatOpenAI

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # default for non-streaming prompts
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # default for streaming prompts
)
- Predefined prompt types:
from langchain_decorators import PromptTypes, PromptTypeSettings
from langchain.chat_models import ChatOpenAI

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or define your own:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea: str) -> str:
    ...
- Settings directly in the decorator:
from langchain.llms import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    # ...
)
def creative_writer(book_title: str) -> str:
    ...
Debugging
Logging to console
You can control console logging in several ways:
- Set the env variable `LANGCHAIN_DECORATORS_VERBOSE=true`
- Define global settings (see Defining custom settings)
- Turn on verbose mode on a specific prompt:

@llm_prompt(verbose=True)
def your_prompt(param1):
    ...
Support for LangSmith
Using langchain_decorators turns your prompts into first-class citizens in LangSmith. It creates chains named after your functions, making traces easier to interpret. Additionally, you can add tags:
@llm_prompt(tags=["my_tag"])
def my_prompt(input_arg=...):
    """
    ...
    """
Contributing
Feedback, contributions, and PRs are welcome 🙏