MCP-Reflect 🪞
A framework for model self-reflection and meta-cognitive processing.
MCP-Reflect is a Model Context Protocol (MCP) tool for enhancing model self-reflection capabilities. It helps improve AI responses through structured evaluation and feedback.
🌟 Features
- ✅ Qualitative Response Improvement - Transform model outputs into more accurate, clear, and complete versions
- ✅ Structured Self-Evaluation - Score responses across multiple quality dimensions
- ✅ Concrete Improvement Suggestions - Get actionable feedback for each dimension
- ✅ Multiple Processing Modes - Process responses independently, iteratively, or comparatively
- ✅ MCP-Compatible - Seamlessly integrates with Claude and other MCP-compatible assistants
📊 Evaluation Dimensions
MCP-Reflect evaluates model responses across these key dimensions:
- Accuracy: Factual correctness and absence of errors
- Clarity: How well-structured and easy to understand the response is
- Completeness: Whether all relevant aspects of the topic are addressed
- Relevance: How directly the response addresses the query
- Coherence: Logical flow and consistency of reasoning
- Conciseness: Appropriate length without unnecessary repetition
- Helpfulness: Practical value and actionability of the response
- Reasoning: Quality of logic, evidence, and argumentation
- Safety: Responsible handling of sensitive topics
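The dimensions above map naturally onto an enum. Here is a minimal illustrative sketch of what an `EvaluationDimension` enum could look like; the actual definition lives in `mcp_reflect/models.py`, and the member names and string values here are assumptions inferred from the usage examples elsewhere in this document:

```python
from enum import Enum


class EvaluationDimension(str, Enum):
    """One member per quality dimension scored by the reflection tool.

    Hypothetical sketch -- member names/values are assumed, not the
    library's confirmed definitions.
    """

    ACCURACY = "accuracy"
    CLARITY = "clarity"
    COMPLETENESS = "completeness"
    RELEVANCE = "relevance"
    COHERENCE = "coherence"
    CONCISENESS = "conciseness"
    HELPFULNESS = "helpfulness"
    REASONING = "reasoning"
    SAFETY = "safety"


# `score.dimension.value` in the usage examples would then yield
# plain strings such as "accuracy" or "reasoning".
print(EvaluationDimension.ACCURACY.value)  # → accuracy
```

With a `str`-backed enum like this, dimension values compare equal to plain strings, which keeps serialized results (e.g. JSON) readable.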
🚀 Installation
Using pip
```bash
# Install from PyPI (stable releases)
pip install mcp-reflect

# Install with all dependencies (including optional)
pip install "mcp-reflect[core]"

# Install latest development version from TestPyPI
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ "mcp-reflect[core]"
```
Using Poetry
```bash
# Install from PyPI (stable releases)
poetry add mcp-reflect

# Install from TestPyPI (development versions)
poetry source add --priority=supplemental test-pypi https://test.pypi.org/simple/
poetry add --source test-pypi mcp-reflect
```
Using UV
UV is a fast Python package installer written in Rust. To install with UV:

```bash
# Install UV if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the package globally as a tool
uv tool install mcp-reflect

# Or install the package with pip
uv pip install mcp-reflect

# Run the package without installing
uvx mcp-reflect
```
💡 Usage
Starting the Server
Run the MCP server directly from the command line:

```bash
mcp-reflect
```

Or programmatically:

```python
from mcp_reflect.server import run_server

# Start on a custom host and port
run_server(host="127.0.0.1", port=9000)
```
Running with HTTP Server
MCP-Reflect can provide an HTTP server for API access:
```bash
# Install with HTTP server support
pip install "mcp-reflect[all]"

# Install with UV (with HTTP server support)
uv tool install "mcp-reflect[all]"

# Run the HTTP server
mcp-reflect-uvx
```

You can customize the host and port using environment variables:

```bash
# Set custom host and port
export HOST=127.0.0.1
export PORT=8080
mcp-reflect-uvx
```

Or run it programmatically:

```python
from mcp_reflect.server import run_uvx_server

# This will start an HTTP server on the specified host and port
run_uvx_server()
```
Once running, the server provides the MCP tools via HTTP endpoints that can be accessed by API clients.
Basic Reflection
The simplest way to use the tool is to pass a model response for reflection:
```python
import asyncio

from mcp_reflect.server import reflect


async def improve_response():
    result = await reflect(
        response="The Earth is approximately 6000 years old according to some estimates.",
        query="How old is the Earth?",
    )
    print(f"Improved response: {result.improved_response}")
    print(f"Overall assessment: {result.overall_assessment}")

    # Print scores for each dimension
    for score in result.scores:
        print(f"{score.dimension.value}: {score.score}/10 - {score.improvement_suggestion}")


asyncio.run(improve_response())
```
Sequential Processing
Process multiple responses with different strategies:
```python
import asyncio

from mcp_reflect.server import sequential_reflect


async def process_multiple_responses():
    responses = [
        "The Earth is approximately 6000 years old according to some estimates.",
        "The Earth formed about 4.5 billion years ago, but there are different methods to determine this.",
    ]

    # Process iteratively (each reflection builds on previous improvements)
    results = await sequential_reflect(responses=responses, mode="iterative")

    # Show the final improved response
    print(f"Final improved response: {results[-1].improved_response}")


asyncio.run(process_multiple_responses())
```
Integration with Claude
MCP-Reflect is designed to work seamlessly with Claude and other MCP-compatible assistants. Here's how to use it with Claude:
- Start the MCP server:

  ```bash
  mcp-reflect
  ```

- Connect Claude to the server (usually handled by your client application).
- Call the reflection tool directly from Claude:

  ```
  I'd like to analyze and improve my previous response. Could you use the reflect tool for this?

  <response>
  The Earth is approximately 6000 years old according to some estimates.
  </response>
  ```
🧠 Advanced Usage
Custom Evaluation Focus
Focus on specific dimensions for targeted improvement:
```python
import asyncio

from mcp_reflect.models import EvaluationDimension
from mcp_reflect.server import reflect


async def focused_evaluation():
    result = await reflect(
        response="The Earth is approximately 6000 years old according to some estimates.",
        query="How old is the Earth?",
        focus_dimensions=[
            EvaluationDimension.ACCURACY,
            EvaluationDimension.REASONING,
        ],
    )

    # Print focused evaluation results
    for score in result.scores:
        print(f"{score.dimension.value}: {score.score}/10")


asyncio.run(focused_evaluation())
```
Custom Improvement Instructions
Provide specific guidance for improvement:
```python
import asyncio

from mcp_reflect.server import reflect


async def guided_improvement():
    result = await reflect(
        response="The Earth is approximately 6000 years old according to some estimates.",
        improvement_prompt="Add scientific consensus and methodologies used for dating.",
    )
    print(result.improved_response)


asyncio.run(guided_improvement())
```
🔬 How It Works
MCP-Reflect uses a multi-stage process to evaluate and improve model responses:
- Analysis Phase: The original response is analyzed across multiple quality dimensions
- Scoring Phase: Each dimension receives a numerical score with specific reasoning
- Improvement Phase: Concrete suggestions for improvement are generated
- Synthesis Phase: An improved version of the response is created
- Packaging Phase: All insights are structured into a comprehensive result
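The five phases above chain together like a simple pipeline. The sketch below is purely illustrative: the function and class names are hypothetical, the scoring and synthesis steps are stubbed with placeholders, and a real implementation would call a language model at each stage.

```python
from dataclasses import dataclass, field


@dataclass
class ReflectionResult:
    # Hypothetical result shape, loosely mirroring the fields used in
    # the usage examples (per-dimension scores, suggestions, improved text).
    scores: dict[str, int] = field(default_factory=dict)
    suggestions: list[str] = field(default_factory=list)
    improved_response: str = ""


def reflect_pipeline(response: str) -> ReflectionResult:
    # Analysis + Scoring phases: rate the response per dimension
    # (stubbed with a constant score here).
    scores = {dim: 5 for dim in ("accuracy", "clarity", "completeness")}

    # Improvement phase: one concrete suggestion per low-scoring dimension.
    suggestions = [f"Improve {d}" for d, s in scores.items() if s < 7]

    # Synthesis phase: produce the improved text
    # (placeholder transformation; a real implementation calls a model).
    improved = response.upper()

    # Packaging phase: bundle everything into a structured result.
    return ReflectionResult(scores=scores, suggestions=suggestions, improved_response=improved)
```

Structuring the output this way means callers always get the intermediate insights (scores, suggestions) alongside the final rewritten text, rather than just an opaque improved string.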
🛠️ Development
Setup
```bash
# Clone the repository
git clone https://github.com/JonesH/mcp-reflect.git
cd mcp-reflect

# Install with Poetry
poetry install

# Run tests
poetry run pytest
```
Project Structure
- `mcp_reflect/models.py` - Data models for evaluation
- `mcp_reflect/evaluator.py` - Core evaluation logic
- `mcp_reflect/server.py` - MCP server and tool definitions
- `tests/` - Test suite
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request