# coaiapy-mcp

MCP (Model Context Protocol) wrapper for the coaiapy observability toolkit.
## Overview
coaiapy-mcp exposes the powerful capabilities of coaiapy through the Model Context Protocol (MCP), enabling any MCP-compatible LLM to leverage:
- Langfuse Observability: Traces, observations, prompts, datasets, score configurations
- Redis Data Stashing: Persistent key-value storage
- Pipeline Automation: Template-based workflow creation
- Audio Processing: Transcription and synthesis via AWS Polly
- Persona Prompts: Mia & Miette dual AI embodiment for narrative-driven technical work
### Why coaiapy-mcp?

**Separation of Concerns:**

- coaiapy: Core functionality (Python 3.6+ for Pythonista iOS compatibility)
- coaiapy-mcp: Modern MCP wrapper (Python 3.10+)
- Both packages coexist independently without dependency conflicts
**LLM Integration:**
- Standardized MCP protocol interface
- Type-safe tools, resources, and prompts
- Works with any MCP-compatible LLM (Claude, GPT-4, etc.)
## Installation

```bash
# Install coaiapy-mcp (includes coaiapy as a dependency)
pip install coaiapy-mcp

# Or install from source
git clone https://github.com/jgwill/coaiapy-mcp.git
cd coaiapy-mcp
pip install -e .
```
### Prerequisites
- Python 3.10 or higher
- Redis server (for tash/fetch operations)
- AWS credentials (for audio processing)
- Langfuse account (for observability features)
## Quick Start

### 1. Start the MCP Server

```bash
coaiapy-mcp start
```
### 2. Connect Your LLM

Configure your MCP-compatible LLM client to connect to the server:

```json
{
  "mcpServers": {
    "coaiapy": {
      "command": "coaiapy-mcp",
      "args": ["start"]
    }
  }
}
```
### 3. Use MCP Tools
Example: Create Langfuse Trace
# In your LLM conversation
Use coaia_fuse_trace_create to create a trace:
- trace_id: "550e8400-e29b-41d4-a716-446655440000"
- user_id: "john_doe"
- name: "Data Pipeline Execution"
Example: Stash to Redis
Use coaia_tash to store data:
- key: "pipeline_result"
- value: "Processing completed successfully"
Example: Load Mia & Miette Prompt
Use mia_miette_duo prompt with variables:
- task_context: "Design observability pipeline"
- technical_details: "Langfuse traces with nested observations"
- creative_goal: "Narrative-driven pipeline creation"
## Available Tools (Phase 1)

### Redis Operations

| Tool | Description | Parameters |
|---|---|---|
| `coaia_tash` | Stash key-value to Redis | `key: str`, `value: str` |
| `coaia_fetch` | Fetch value from Redis | `key: str` |
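Under the hood, these two tools boil down to redis-py `set`/`get` calls wrapped in the toolkit's success/error dict shape. The sketch below is an illustration of that behavior under stated assumptions, not the actual coaiapy-mcp implementation; the helper names are invented, and the client is passed as a parameter for testability.

```python
# Hypothetical sketch of what coaia_tash / coaia_fetch do under the hood.
# Function names and exact return shapes are assumptions for illustration.

def tash(client, key: str, value: str) -> dict:
    """Stash a key-value pair; returns the success/error dict shape used by the tools."""
    try:
        client.set(key, value)
        return {"success": True, "data": {"key": key}}
    except Exception as e:
        return {"success": False, "error": str(e)}

def fetch(client, key: str) -> dict:
    """Fetch a value; decodes bytes returned by redis-py into str."""
    try:
        raw = client.get(key)
        value = raw.decode() if isinstance(raw, bytes) else raw
        return {"success": True, "data": {"key": key, "value": value}}
    except Exception as e:
        return {"success": False, "error": str(e)}
```

Any object exposing `set`/`get` (a real `redis.Redis` client or a stub) works, which keeps the round trip testable without a running server.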
### Langfuse Traces

| Tool | Description | Parameters |
|---|---|---|
| `coaia_fuse_trace_create` | Create new trace | `trace_id`, `user_id?`, `session_id?`, `name?`, `input_data?`, `output_data?`, `metadata?` |
| `coaia_fuse_add_observation` | Add observation to trace | `observation_id`, `trace_id`, `name`, `type?`, `parent_id?`, `input_data?`, `output_data?`, `metadata?`, `start_time?`, `end_time?` |
| `coaia_fuse_add_observations_batch` | Batch add observations | `trace_id`, `observations: list` |
| `coaia_fuse_trace_get` | Get specific trace | `trace_id`, `json_output?` |
| `coaia_fuse_trace_view` | View trace tree (JSON) | `trace_id` |
| `coaia_fuse_traces_list` | **NEW** List traces with filters | `session_id?`, `user_id?`, `name?`, `tags?`, `from_timestamp?`, `to_timestamp?`, `order_by?`, `version?`, `release?`, `environment?`, `page?`, `limit?`, `json_output?` |
| `coaia_fuse_traces_session_view` | View traces by session | `session_id`, `json_output?` |
**IMPORTANT:** When creating traces and observations, use `input_data` for context/inputs and `output_data` for results/outputs. Use `metadata` only for additional tags and labels.
### Langfuse Prompts

| Tool | Description | Parameters |
|---|---|---|
| `coaia_fuse_prompts_list` | List all prompts | (none) |
| `coaia_fuse_prompts_get` | Get specific prompt | `name`, `label?` |
### Langfuse Datasets

| Tool | Description | Parameters |
|---|---|---|
| `coaia_fuse_datasets_list` | List all datasets | (none) |
| `coaia_fuse_datasets_get` | Get specific dataset | `name` |
### Langfuse Score Configurations

| Tool | Description | Parameters |
|---|---|---|
| `coaia_fuse_score_configs_list` | List configurations | (none) |
| `coaia_fuse_score_configs_get` | Get specific config | `name_or_id: str` |
| `coaia_fuse_score_apply` | Apply score to trace/observation | `config_name_or_id: str`, `target_type: str`, `target_id: str`, `value: any`, `observation_id?: str`, `comment?: str` |
**Score Application Examples:**

```text
# Apply numeric score to a trace
Use coaia_fuse_score_apply:
- config_name_or_id: "accuracy"
- target_type: "trace"
- target_id: "trace-123"
- value: 0.95

# Apply categorical score to an observation
Use coaia_fuse_score_apply:
- config_name_or_id: "quality-rating"
- target_type: "trace"
- target_id: "trace-123"
- observation_id: "obs-456"
- value: "excellent"
- comment: "High quality output with clear reasoning"
```
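Score application validates the value against the score configuration's type (numeric range, categorical labels, or boolean). The sketch below illustrates how such validation could look; the config field names (`data_type`, `categories`, `min_value`, `max_value`) mirror Langfuse score-config concepts but are assumptions here, not the actual coaiapy-mcp code.

```python
# Hypothetical score-value validation against a score config dict.
# Field names are assumptions for illustration.

def validate_score_value(config: dict, value) -> tuple[bool, str]:
    """Return (ok, message) for a candidate score value."""
    dtype = config.get("data_type", "NUMERIC")
    if dtype == "CATEGORICAL":
        allowed = [c["label"] for c in config.get("categories", [])]
        if value not in allowed:
            return False, f"value must be one of {allowed}"
        return True, "ok"
    if dtype == "NUMERIC":
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            return False, "numeric config requires a number"
        lo, hi = config.get("min_value"), config.get("max_value")
        if lo is not None and value < lo:
            return False, f"value below minimum {lo}"
        if hi is not None and value > hi:
            return False, f"value above maximum {hi}"
        return True, "ok"
    if dtype == "BOOLEAN":
        if not isinstance(value, bool):
            return False, "boolean config requires True/False"
        return True, "ok"
    return False, f"unknown data_type: {dtype}"
```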
## Available Resources (Phase 1)

| Resource URI | Content Type | Description |
|---|---|---|
| `coaia://templates/` | `application/json` | List of 5 built-in pipeline templates |
| `coaia://templates/{name}` | `application/json` | Specific template with variables |
**Example Usage:**

```text
# List available templates
Read coaia://templates/

# Get specific template
Read coaia://templates/data-pipeline
```
## Available Prompts (Phase 1)

### 🧠🌸 Mia & Miette Duo Embodiment

**Prompt ID:** `mia_miette_duo`

Dual AI embodiment for narrative-driven technical work:

- Mia (🧠): Recursive DevOps Architect & Narrative Lattice Forger
- Miette (🌸): Emotional Explainer Sprite & Narrative Echo

**Variables:**

- `task_context`: High-level task description
- `technical_details`: Specific technical requirements
- `creative_goal`: Desired creative outcome
Use Cases:
- System architecture design with narrative clarity
- Technical explanations with emotional resonance
- Creative-oriented problem resolution
### Create Observability Pipeline

**Prompt ID:** `create_observability_pipeline`

Step-by-step guide for Langfuse pipeline creation.

**Variables:**

- `trace_name`: Name of the trace
- `user_id`: User identifier
- `steps`: Pipeline steps (comma-separated)
### Analyze Audio Workflow

**Prompt ID:** `analyze_audio_workflow`

Workflow for audio transcription and summarization.

**Variables:**

- `file_path`: Path to audio file
- `summary_style`: Summarization style (concise, detailed, narrative)
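Internally, a parameterized prompt like the ones above can be rendered by substituting its variables into a template string. The template body and helper below are illustrative assumptions, not the actual coaiapy-mcp prompt definitions.

```python
# Hypothetical prompt rendering via str.format-style substitution.
# The template text is invented for illustration.

PIPELINE_PROMPT = (
    "Create a Langfuse trace named '{trace_name}' for user '{user_id}'.\n"
    "Add one observation per step: {steps}."
)

def render_prompt(template: str, **variables) -> str:
    """Fill a prompt template; raises KeyError if a variable is missing."""
    return template.format(**variables)
```

A missing variable surfaces immediately as a `KeyError` rather than producing a half-filled prompt.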
## Examples

### Complete Observability Workflow
```python
# 1. Create trace with input/output data (PREFERRED)
trace_id = "550e8400-e29b-41d4-a716-446655440000"
result = coaia_fuse_trace_create(
    trace_id=trace_id,
    user_id="data_engineer",
    name="ETL Pipeline Execution",
    input_data={
        "source": "sales_database",
        "query": "SELECT * FROM transactions WHERE date > '2024-01-01'",
        "parameters": {"limit": 1000}
    },
    output_data={
        "rows_processed": 1000,
        "status": "success",
        "duration_ms": 1234
    },
    metadata={
        "environment": "production",
        "version": "1.0.0"
    }
)

# 2. Add observations with input/output (PREFERRED)
obs_id_1 = "660e8400-e29b-41d4-a716-446655440001"
coaia_fuse_add_observation(
    observation_id=obs_id_1,
    trace_id=trace_id,
    name="Data Validation",
    observation_type="SPAN",
    input_data={
        "schema_version": "v2",
        "validation_rules": ["not_null", "unique_id"]
    },
    output_data={
        "valid_rows": 995,
        "invalid_rows": 5,
        "errors": ["duplicate_id: row_123"]
    },
    metadata={
        "validator": "json_schema_v4"
    }
)

obs_id_2 = "660e8400-e29b-41d4-a716-446655440002"
coaia_fuse_add_observation(
    observation_id=obs_id_2,
    trace_id=trace_id,
    name="Data Transformation",
    observation_type="SPAN",
    parent_id=obs_id_1,
    input_data={
        "valid_rows": 995,
        "transformation": "normalize_dates"
    },
    output_data={
        "transformed_rows": 995,
        "format": "iso8601"
    }
)

# 3. View trace tree
trace_data = coaia_fuse_trace_view(trace_id=trace_id)

# 4. Stash results to Redis
coaia_tash("etl_trace_id", trace_id)
```
**Best Practice:** Always use `input_data` and `output_data` fields to capture what went into an operation and what came out. Reserve `metadata` for tags, labels, and configuration details.
### Using Template Resources

```python
# List available templates
templates = read_resource("coaia://templates/")
# Returns: ["simple-trace", "data-pipeline", "llm-chain", ...]

# Get specific template
template_data = read_resource("coaia://templates/data-pipeline")
# Returns: {
#   "name": "data-pipeline",
#   "description": "Multi-step data processing workflow",
#   "variables": ["pipeline_name", "data_source", ...],
#   "steps": [...]
# }
```
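Resolving a `coaia://` resource URI amounts to simple path routing. The sketch below shows one plausible way to parse these URIs; the function name and return shape are assumptions, not the actual resource provider.

```python
# Hypothetical parser for coaia:// resource URIs.
from urllib.parse import urlparse

def parse_resource_uri(uri: str) -> dict:
    """Split a coaia:// URI into its resource kind and optional name."""
    parsed = urlparse(uri)
    if parsed.scheme != "coaia":
        raise ValueError(f"unsupported scheme: {parsed.scheme}")
    kind = parsed.netloc                          # e.g. "templates"
    parts = [p for p in parsed.path.split("/") if p]
    return {"kind": kind, "name": parts[0] if parts else None}
```

`coaia://templates/` resolves to the listing (no name), while `coaia://templates/data-pipeline` carries a template name.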
### Mia & Miette Narrative Architecture

```text
# Load Mia & Miette prompt
Use prompt: mia_miette_duo

Variables:
- task_context: "Design microservices architecture for storytelling platform"
- technical_details: "Event-driven system with Langfuse observability"
- creative_goal: "Narrative-driven creation workflow with structural tension"

# Response will include:
# 🧠 Mia: Technical architecture with structural precision
# 🌸 Miette: Emotional illumination and intuitive clarity
```
## Configuration

### Environment Variables

```bash
# Feature Configuration (controls which tools/prompts/resources are exposed)
export COAIAPY_MCP_FEATURES="STANDARD"  # Options: MINIMAL, STANDARD, OBSERVABILITY, FULL

# Langfuse Configuration
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"

# AWS Configuration (for audio processing)
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Redis Configuration
export REDIS_HOST="localhost"
export REDIS_PORT="6379"
export REDIS_DB="0"
```
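A server typically reads these variables once at startup with sensible defaults. The helper below is an illustrative sketch of that pattern, not the actual coaiapy-mcp configuration loader (which uses `coaiamodule.read_config()`, as described under Implementation Notes).

```python
# Hypothetical env-driven config reader; defaults mirror the variables above.
import os

def load_redis_settings(env=os.environ) -> dict:
    """Collect Redis connection settings from the environment."""
    return {
        "host": env.get("REDIS_HOST", "localhost"),
        "port": int(env.get("REDIS_PORT", "6379")),
        "db": int(env.get("REDIS_DB", "0")),
    }
```

Passing the environment as a parameter keeps the helper testable without mutating `os.environ`.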
### Feature Configuration

Control which MCP features are exposed to reduce token usage in Claude Code context:

#### Feature Levels

**MINIMAL** (Lowest token usage)

- Tools: Core observability only
  - Redis: tash, fetch
  - Traces: create, view, patch, add observations
  - Langfuse management: prompts, datasets, score configs, comments
- Prompts: None
- Resources: None
- Token savings: ~3000 tokens vs FULL

**STANDARD** (Default - Balanced)

- Tools: Same as MINIMAL
- Prompts: Workflow guides only
  - `create_observability_pipeline`
  - `analyze_audio_workflow`
- Resources: Pipeline templates
- Token savings: ~1300 tokens vs FULL

**OBSERVABILITY** (Observability-focused)

- Tools: Same as STANDARD
- Prompts: Same as STANDARD
- Resources: Same as STANDARD
- Token savings: ~1300 tokens vs FULL

**FULL** (Everything)

- Tools: All tools including media upload
- Prompts: All prompts including Mia & Miette persona
  - `mia_miette_duo` (dual AI embodiment)
  - `create_observability_pipeline`
  - `analyze_audio_workflow`
- Resources: All resources
- Token savings: 0 (baseline)
#### Usage

```bash
# Use MINIMAL for basic trace creation (lowest token usage)
export COAIAPY_MCP_FEATURES="MINIMAL"

# Use STANDARD for everyday workflows (default)
export COAIAPY_MCP_FEATURES="STANDARD"

# Use FULL for Mia & Miette persona and media features
export COAIAPY_MCP_FEATURES="FULL"
```

The feature level is logged on server startup:

```text
INFO - Starting coaiapy-mcp server with feature level: STANDARD
INFO - Enabled features: 18 tools, 2 prompts, 1 resources
```
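A feature level like this is typically resolved into a set of capability flags at startup. The mapping below is a hypothetical sketch of how that gating could be implemented; the level names match the documentation above, but the flag structure is an assumption.

```python
# Hypothetical feature-level gating; unknown values fall back to STANDARD.
import os

FEATURE_LEVELS = {
    "MINIMAL": {"prompts": False, "resources": False, "media": False},
    "STANDARD": {"prompts": True, "resources": True, "media": False},
    "OBSERVABILITY": {"prompts": True, "resources": True, "media": False},
    "FULL": {"prompts": True, "resources": True, "media": True},
}

def resolve_feature_level(env=os.environ) -> dict:
    """Map COAIAPY_MCP_FEATURES to capability flags."""
    level = env.get("COAIAPY_MCP_FEATURES", "STANDARD").upper()
    if level not in FEATURE_LEVELS:
        level = "STANDARD"
    return {"level": level, **FEATURE_LEVELS[level]}
```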
### MCP Server Configuration

Create `coaiapy-mcp.json`:

```json
{
  "server": {
    "host": "localhost",
    "port": 3000
  },
  "logging": {
    "level": "info",
    "file": "/var/log/coaiapy-mcp.log"
  },
  "cache": {
    "enabled": true,
    "ttl": 3600
  }
}
```
## Development

### Running Tests

```bash
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/

# Run with coverage
pytest --cov=coaiapy_mcp tests/

# Run specific test
pytest tests/test_tools.py::test_tash_fetch_roundtrip
```
### Project Structure

```text
coaiapy-mcp/
├── coaiapy_mcp/
│   ├── __init__.py
│   ├── server.py        # MCP server implementation
│   ├── tools.py         # Tool wrappers
│   ├── resources.py     # Resource providers
│   └── prompts.py       # Prompt templates
├── tests/
│   ├── test_tools.py
│   ├── test_resources.py
│   └── test_prompts.py
├── pyproject.toml
├── setup.py
├── README.md                 # This file
├── IMPLEMENTATION_PLAN.md    # Detailed implementation plan
└── ROADMAP.md                # Future enhancements
```
## Roadmap

See ROADMAP.md for the detailed release schedule.

**Upcoming Features:**

- v0.2.0: Pipeline automation tools (pipeline create, env management)
- v0.3.0: Audio processing tools (transcribe, summarize)
- v0.4.0+: Advanced features (sessions, scores, streaming, caching)
## Contributing

Contributions are welcome! See IMPLEMENTATION_PLAN.md for development guidelines.

### Good First Issues

- Add new prompt templates
- Write usage examples
- Improve error messages
- Add input validation

### How to Contribute

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

Same license as coaiapy (MIT assumed).

## Links

- coaiapy Package: https://pypi.org/project/coaiapy/
- MCP Protocol: https://github.com/modelcontextprotocol
- Langfuse: https://langfuse.com/
- Documentation: [Coming Soon]

## Support

- Issues: https://github.com/jgwill/coaiapy-mcp/issues
- Discussions: https://github.com/jgwill/coaiapy-mcp/discussions

## Acknowledgments

- coaiapy: The underlying observability toolkit
- MCP Community: Model Context Protocol development
- Langfuse: Observability infrastructure
- Mia & Miette: Dual AI embodiment concept by Guillaume Isabelle
**Status:** Phase 1 Complete
**Next Milestone:** Phase 2 - Pipeline Automation
**Last Updated:** 2025-10-17
## Implementation Status

**Phase 1 (Core Langfuse Observability): [DONE] COMPLETE**

### What's Implemented

- [DONE] Package Structure - Modern Python packaging with pyproject.toml
- [DONE] Library Import Approach - Direct imports from coaiapy, langfuse, redis (not subprocess)
- [DONE] Configuration Loading - Single config load via coaiamodule.read_config()
- [DONE] Client Initialization - Redis and Langfuse clients initialized once, shared across tools
- [DONE] Graceful Degradation - Tools work even when services are unavailable
- [DONE] Error Handling - All tools return success/error dicts, never crash
### Tools Implemented (13 total)

**Redis Tools (2)**

- `coaia_tash` - Stash key-value to Redis
- `coaia_fetch` - Fetch value from Redis

**Langfuse Trace Tools (4)**

- `coaia_fuse_trace_create` - Create new trace
- `coaia_fuse_add_observation` - Add observation to trace
- `coaia_fuse_trace_view` - View trace details
- `coaia_fuse_traces_list` - **NEW** List traces with comprehensive filtering (session, user, name, tags, timestamps, etc.)

**Langfuse Prompts Tools (2)**

- `coaia_fuse_prompts_list` - List all prompts
- `coaia_fuse_prompts_get` - Get specific prompt

**Langfuse Datasets Tools (2)**

- `coaia_fuse_datasets_list` - List all datasets
- `coaia_fuse_datasets_get` - Get specific dataset

**Langfuse Score Configs Tools (3)**

- `coaia_fuse_score_configs_list` - List configurations
- `coaia_fuse_score_configs_get` - Get specific config
- `coaia_fuse_score_apply` - Apply score config to trace/observation with validation
### Resources Implemented (3)

- `coaia://templates/` - List all pipeline templates
- `coaia://templates/{name}` - Get specific template
- `coaia://templates/{name}/variables` - Get template variables

### Prompts Implemented (3)

- `mia_miette_duo` - Dual AI embodiment (Mia & Miette)
- `create_observability_pipeline` - Guided Langfuse pipeline creation
- `analyze_audio_workflow` - Audio transcription & summarization
## Testing

### Run Tests

```bash
# Install test dependencies
pip install pytest pytest-asyncio

# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_prompts.py -v

# Run with coverage
pytest --cov=coaiapy_mcp tests/
```

### Validation Script

Run comprehensive validation without external services:

```bash
python validate_implementation.py
```
This validates:
- Package structure and metadata
- All tool registrations
- Prompt rendering
- Resource loading
- Server module structure
### Test Results

- Prompts: 12/12 tests passing [DONE]
- Resources: 6/6 tests passing [DONE]
- Tools: 8/12 passing (4 failures expected due to network connectivity)
## Implementation Notes

### Why Library Imports Instead of Subprocess?

The original plan called for subprocess wrappers, but that approach has several problems:

- Environment variable propagation issues
- Slower execution (process creation overhead)
- Complex error handling (parsing stderr)
- Credential management challenges

Benefits of library imports:

- [DONE] Direct Python function calls - fast and clean
- [DONE] Proper exception handling with typed errors
- [DONE] Direct access to return values (no JSON parsing)
- [DONE] Shared configuration (load once, use everywhere)
- [DONE] No environment variable inheritance issues
### Configuration Management

Configuration is loaded once on module import via `coaiamodule.read_config()`:

```python
import redis
from langfuse import Langfuse

from coaiapy import coaiamodule

# Load config once
config = coaiamodule.read_config()

# Initialize clients with config
redis_client = redis.Redis(**config.get("jtaleconf", {}))
langfuse_client = Langfuse(
    secret_key=config.get("langfuse_secret_key"),
    public_key=config.get("langfuse_public_key"),
    host=config.get("langfuse_host", "https://cloud.langfuse.com")
)
```
### Error Handling Pattern

All tools follow a consistent error handling pattern:

```python
from typing import Any, Dict

async def tool_function(params) -> Dict[str, Any]:
    try:
        # Perform the operation
        result = do_something(params)
        return {
            "success": True,
            "data": result
        }
    except Exception as e:
        return {
            "success": False,
            "error": str(e)
        }
```
This ensures:
- No uncaught exceptions crash the MCP server
- Consistent response format for all tools
- Proper error messages for debugging
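Rather than repeating the try/except in every tool, the same pattern can be factored into a decorator. This is an illustrative sketch, not the actual coaiapy-mcp code; the decorator name `mcp_tool` is invented.

```python
# Hypothetical decorator applying the success/error dict pattern to async tools.
import asyncio
import functools
from typing import Any, Callable, Dict

def mcp_tool(func: Callable) -> Callable:
    """Wrap an async tool so it always returns a success/error dict."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs) -> Dict[str, Any]:
        try:
            return {"success": True, "data": await func(*args, **kwargs)}
        except Exception as e:
            return {"success": False, "error": str(e)}
    return wrapper

@mcp_tool
async def echo(value: str) -> str:
    """Toy tool: returns its input, or raises on an empty value."""
    if not value:
        raise ValueError("empty value")
    return value
```

Each decorated tool keeps its own logic minimal while the wrapper guarantees the uniform response shape.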
## Code Quality
- [DONE] Type hints throughout
- [DONE] Comprehensive docstrings
- [DONE] Async/await patterns
- [DONE] Error handling best practices
- [DONE] Modular design (tools, resources, prompts, server)
- [DONE] Test coverage for all modules
## Next Steps

### Phase 2: Pipeline Automation

- [ ] `coaia_pipeline_create` - Create pipeline from template
- [ ] `coaia_pipeline_list` - List pipeline templates
- [ ] `coaia_pipeline_show` - Show template details
- [ ] Environment resources (`coaia://env/global`, `coaia://env/project`)

### Phase 3: Audio Processing

- [ ] `coaia_transcribe` - Transcribe audio file
- [ ] `coaia_summarize` - Summarize text
- [ ] `coaia_process_tag` - Process with custom tags
### Future Enhancements
- Streaming support for long-running operations
- Caching layer for frequently accessed resources
- Batch operations for traces/observations
- Performance monitoring and metrics
- Enhanced error recovery
**Implementation completed:** 2025-10-17
**Status:** Phase 1 Complete [DONE]
**Approach:** Library imports (not subprocess)
**Test Coverage:** Comprehensive (20+ tests)