LangGraph-Reflection
LangGraph agent that runs a reflection step
This prebuilt graph is an agent that uses a reflection-style architecture to check and improve an initial agent's output.
Installation
pip install langgraph-reflection
Details
This reflection agent uses two subagents:
- A "main" agent, which is the agent attempting to solve the user's task
- A "critique" agent, which checks the main agent's work and offers any critiques
The reflection agent has the following architecture:
- First, the main agent is called
- Once the main agent is finished, the critique agent is called
- Based on the result of the critique agent:
  - If the critique agent finds something to critique, the main agent is called again
  - If there is nothing to critique, the overall reflection agent finishes
- Repeat until the overall reflection agent finishes
We make some assumptions about the graphs:
- The main agent should take as input a list of messages
- The critique agent should return a user message if there are any critiques; otherwise it should return no messages.
Examples
Below are a few examples of how to use this reflection agent.
LLM-as-a-Judge (examples/llm_as_a_judge.py)
In this example, the reflection agent uses another LLM to judge its output. The judge evaluates responses based on:
- Accuracy - Is the information correct and factual?
- Completeness - Does it fully address the user's query?
- Clarity - Is the explanation clear and well-structured?
- Helpfulness - Does it provide actionable and useful information?
- Safety - Does it avoid harmful or inappropriate content?
Example usage:
```python
from langchain.chat_models import init_chat_model
from langgraph.graph import MessagesState, StateGraph
from langgraph_reflection import create_reflection_graph

# Define the main assistant graph
assistant_graph = ...

# Define the judge function that evaluates responses
def judge_response(state, config):
    """Evaluate the assistant's response using a separate judge model."""
    judge_model = init_chat_model(...).bind_tools([Finish])
    response = judge_model.invoke([...])
    # If the judge called Finish, the response is approved
    if len(response.tool_calls) == 1:
        return
    else:
        # Return the judge's critique as a new user message
        return {"messages": [{"role": "user", "content": response.content}]}

# Create the judge graph
judge_graph = StateGraph(MessagesState).add_node(judge_response)...

# Create a reflection graph that combines assistant and judge
reflexion_app = create_reflection_graph(assistant_graph, judge_graph)
result = reflexion_app.invoke({"messages": example_query})
```
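The Finish tool that the judge binds is not defined in the snippet above. One hypothetical way to declare it is as an empty schema whose docstring tells the model when to call it; recent langchain-core versions accept TypedDict tool schemas in bind_tools, and a Pydantic model works the same way:

```python
from typing import TypedDict

class Finish(TypedDict):
    """Call this tool when the response passes all criteria and needs no further revision."""
```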
Code Validation (examples/coding.py)
This example demonstrates how to use the reflection agent to validate and improve Python code. It uses Pyright for static type checking and error detection. The system:
- Takes a coding task as input
- Generates Python code using the main agent
- Validates the code using Pyright
- If errors are found, sends them back to the main agent for correction
- Repeats until the code passes validation
Example usage:
```python
from langgraph.graph import MessagesState, StateGraph
from langgraph_reflection import create_reflection_graph

assistant_graph = ...

# Function that validates code using Pyright
def try_running(state: dict) -> dict | None:
    """Attempt to run and analyze the extracted Python code."""
    # Extract code from the conversation
    code = extract_python_code(state["messages"])
    # Run Pyright analysis
    result = analyze_with_pyright(code)
    if result["summary"]["errorCount"]:
        # If errors are found, return a critique for the main agent
        return {
            "messages": [{
                "role": "user",
                "content": f"I ran pyright and found this: {result['generalDiagnostics']}\n\n"
                "Try to fix it...",
            }]
        }
    # No errors found - return None to indicate success
    return None

# Create the judge graph
judge_graph = StateGraph(MessagesState).add_node(try_running)...

# Create a reflection system that combines code generation and validation
reflexion_app = create_reflection_graph(assistant_graph, judge_graph)
result = reflexion_app.invoke({"messages": example_query})
```
The code validation example ensures that generated code is not only syntactically correct but also type-safe and follows best practices through static analysis.
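The example relies on two helpers, extract_python_code and analyze_with_pyright, defined in examples/coding.py but not shown here. A hedged sketch of what the extraction step might look like; the fence-matching regex and the dict-message assumption are ours, not necessarily the package's:

```python
import re

# The fence marker is built from pieces so the sketch itself stays
# readable; it matches a literal triple-backtick "python" opener.
_FENCE = "`" * 3
_PATTERN = re.compile(_FENCE + r"python\n(.*?)" + _FENCE, re.DOTALL)

def extract_python_code(messages: list) -> str:
    """Return the last fenced Python block found in the conversation.

    Hypothetical helper: assumes messages are dicts with a "content"
    key (real LangChain message objects expose .content instead) and
    that the main agent emits code in fenced python blocks.
    """
    blocks: list[str] = []
    for message in messages:
        content = message["content"] if isinstance(message, dict) else message.content
        blocks.extend(_PATTERN.findall(content))
    if not blocks:
        raise ValueError("No fenced Python code block found in messages")
    return blocks[-1]
```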
File details
Details for the file langgraph_reflection-0.0.1.tar.gz.
File metadata
- Download URL: langgraph_reflection-0.0.1.tar.gz
- Upload date:
- Size: 3.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fb6fc31057440056335522e203158d22f2f8c14291cb71a791c823d5f46c3b82 |
| MD5 | 953ba43a2ecc5e0b33ea507f3452e18b |
| BLAKE2b-256 | 3da9d73bb4b55ca60ea846f85fce31e1ab834360c2230c41b91156efd59c62ac |
File details
Details for the file langgraph_reflection-0.0.1-py3-none-any.whl.
File metadata
- Download URL: langgraph_reflection-0.0.1-py3-none-any.whl
- Upload date:
- Size: 3.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bb273f22978ac037af4213a02cf79658208e32a6a25d87d754223e81ded3ef06 |
| MD5 | 87ef86366dcae7aed9f501cd4f8281e4 |
| BLAKE2b-256 | a56d88bdedfb86f35458b6575d4b8f5fde167fc2bd64b265caf0d973cb8e1922 |