
Langfuse MCP server for accessing and analyzing telemetry data via natural language


Langfuse MCP (Model Context Protocol)


This project provides a Model Context Protocol (MCP) server for Langfuse, allowing AI agents to query Langfuse trace data for better debugging and observability.

Quick Start with Cursor

Add to Cursor

Installation Options

🎯 From Cursor IDE: Click the button above (works seamlessly!)
🌐 From GitHub Web: Copy this deeplink and paste into your browser address bar:

cursor://anysphere.cursor-deeplink/mcp/install?name=langfuse-mcp&config=eyJjb21tYW5kIjoidXZ4IiwiYXJncyI6WyJsYW5nZnVzZS1tY3AiLCItLXB1YmxpYy1rZXkiLCJZT1VSX1BVQkxJQ19LRVkiLCItLXNlY3JldC1rZXkiLCJZT1VSX1NFQ1JFVF9LRVkiLCItLWhvc3QiLCJodHRwczovL2Nsb3VkLmxhbmdmdXNlLmNvbSJdfQ==

⚙️ Manual Setup: See Configuration section below

💡 Note: The "Add to Cursor" button only works from within Cursor IDE due to browser security restrictions on custom protocols (cursor://). This is normal and expected behavior per Cursor's documentation.

After installation: Replace YOUR_PUBLIC_KEY and YOUR_SECRET_KEY with your actual Langfuse credentials in Cursor's MCP settings.
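
The config payload embedded in the deeplink above is plain base64-encoded JSON, so you can inspect it, or regenerate it with your own credentials, in a few lines (the encoded string is copied verbatim from the deeplink; "pk-lf-..." is a placeholder key):

```python
import base64
import json

# The config= query parameter from the deeplink above.
encoded = (
    "eyJjb21tYW5kIjoidXZ4IiwiYXJncyI6WyJsYW5nZnVzZS1tY3AiLCItLXB1YmxpYy1rZXki"
    "LCJZT1VSX1BVQkxJQ19LRVkiLCItLXNlY3JldC1rZXkiLCJZT1VSX1NFQ1JFVF9LRVkiLCIt"
    "LWhvc3QiLCJodHRwczovL2Nsb3VkLmxhbmdmdXNlLmNvbSJdfQ=="
)
config = json.loads(base64.b64decode(encoded))
print(config["command"])   # uvx
print(config["args"][0])   # langfuse-mcp

# Rebuild the payload after swapping in a real public key:
config["args"][config["args"].index("YOUR_PUBLIC_KEY")] = "pk-lf-..."
re_encoded = base64.b64encode(json.dumps(config).encode()).decode()
```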

Features

  • Integration with Langfuse for trace and observation data
  • Prompt management - get, list, create, and update prompts
  • Tool suite for AI agents to query trace data
  • Exception and error tracking capabilities
  • Session and user activity monitoring

Available Tools

The MCP server exposes 18 tools, grouped by domain:

Traces

  • fetch_traces - Search/filter traces with pagination
  • fetch_trace - Fetch a specific trace by ID

Observations

  • fetch_observations - Search/filter observations with pagination
  • fetch_observation - Fetch a specific observation by ID

Sessions

  • fetch_sessions - List recent sessions with pagination
  • get_session_details - Get detailed session info by ID
  • get_user_sessions - Get all sessions for a user

Exceptions

  • find_exceptions - Find exceptions grouped by file/function/type
  • find_exceptions_in_file - Find exceptions in a specific file
  • get_exception_details - Get detailed info about a specific exception
  • get_error_count - Get total error count

Prompts

  • get_prompt - Fetch prompt with resolved dependencies
  • get_prompt_unresolved - Fetch prompt with dependency tags intact (falls back to resolved content if SDK lacks resolve support)
  • list_prompts - List/filter prompts with pagination
  • create_text_prompt - Create new text prompt version
  • create_chat_prompt - Create new chat prompt version
  • update_prompt_labels - Update labels for a prompt version

Schema

  • get_data_schema - Get schema information for the data structures used in responses

Setup

Install uv

First, make sure uv is installed. For installation instructions, see the uv installation docs.

If you already have an older version of uv installed, you might need to update it with uv self update.

Installation

Requirement: The server depends on the Langfuse Python SDK v3. Installing the package automatically pulls in langfuse>=3.11.2. Python 3.10–3.13 is required; Python 3.14 is not yet supported while upstream SDK support is pending.

uv pip install langfuse-mcp

If you're iterating on this repository, install the local checkout instead of PyPI:

# from the repo root
uv pip install --editable .

Recommended local environment

For development we suggest creating an isolated environment pinned to Python 3.11 (the version used in CI):

uv venv --python 3.11 .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate
uv pip install --python .venv/bin/python -e .

All subsequent examples assume the virtual environment is activated.

Obtain Langfuse credentials

You'll need your Langfuse public key, secret key, and host URL. Instead of passing them as CLI flags each time, you can store them in a local .env file:

LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com

When present, the MCP server reads these values automatically. CLI arguments still override the environment if provided.
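
That precedence (explicit CLI flag first, then the environment, then a default) can be sketched in a few lines; `resolve` is an illustrative helper, not the server's actual function name:

```python
import os

def resolve(cli_value, env_var, default=None):
    """An explicit CLI argument wins; otherwise fall back to the environment, then a default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)

# With no flag and no LANGFUSE_HOST set, the Cloud endpoint is used:
host = resolve(None, "LANGFUSE_HOST", "https://cloud.langfuse.com")
```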

Running the Server

Run the server using uvx or the project virtual environment:

uvx langfuse-mcp --public-key YOUR_KEY --secret-key YOUR_SECRET --host https://cloud.langfuse.com

# or, once inside the repo virtual environment
langfuse-mcp --public-key YOUR_KEY --secret-key YOUR_SECRET --host https://cloud.langfuse.com

Local checkout tip: During development run uv run --from /path/to/langfuse-mcp langfuse-mcp ... (or uv run python -m langfuse_mcp ...) so uv executes the code in your working tree. Using the PyPI shortcut skips repository-only changes such as the new environment-based credential defaults and logging tweaks.

The server writes diagnostic logs to /tmp/langfuse_mcp.log. Remove the --host switch if you are targeting the default Cloud endpoint. Use --log-level (e.g., --log-level DEBUG) and --log-to-console to control verbosity during debugging.

Selective Tool Loading

Use --tools to load only specific tool groups, reducing token overhead:

# Load only trace and prompt tools
langfuse-mcp --tools traces,prompts

# Available groups: traces, observations, sessions, exceptions, prompts, schema
# Default: all

Or set via environment: LANGFUSE_MCP_TOOLS=traces,prompts
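
Validating such a comma-separated group list is straightforward; a sketch, assuming the server rejects unknown group names (`parse_tool_groups` and its exact error handling are illustrative, not the server's actual implementation):

```python
KNOWN_GROUPS = {"traces", "observations", "sessions", "exceptions", "prompts", "schema"}

def parse_tool_groups(raw):
    """Return the requested tool groups, defaulting to all when unset or 'all'."""
    if not raw or raw.strip().lower() == "all":
        return set(KNOWN_GROUPS)
    groups = {part.strip() for part in raw.split(",") if part.strip()}
    unknown = groups - KNOWN_GROUPS
    if unknown:
        raise ValueError(f"unknown tool groups: {sorted(unknown)}")
    return groups

print(sorted(parse_tool_groups("traces,prompts")))  # ['prompts', 'traces']
```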

Run with Docker

Option 1: Pull from GitHub Container Registry (Recommended)

Pull and run the pre-built image:

docker pull ghcr.io/avivsinai/langfuse-mcp:latest
docker run --rm -i \
  -e LANGFUSE_PUBLIC_KEY=YOUR_PUBLIC_KEY \
  -e LANGFUSE_SECRET_KEY=YOUR_SECRET_KEY \
  -e LANGFUSE_HOST=https://cloud.langfuse.com \
  -e LANGFUSE_MCP_LOG_FILE=/logs/langfuse_mcp.log \
  -v "$(pwd)/logs:/logs" \
  ghcr.io/avivsinai/langfuse-mcp:latest

Available tags:

  • latest - Most recent release
  • v0.2.0 - Specific version
  • 0.2 - Major.minor version

Option 2: Build from source

Build the image from the repository root so the container installs the current checkout instead of the latest PyPI release:

docker build -t langfuse-logs-mcp .
docker run --rm -i \
  -e LANGFUSE_PUBLIC_KEY=YOUR_PUBLIC_KEY \
  -e LANGFUSE_SECRET_KEY=YOUR_SECRET_KEY \
  -e LANGFUSE_HOST=https://cloud.langfuse.com \
  -e LANGFUSE_MCP_LOG_FILE=/logs/langfuse_mcp.log \
  -v "$(pwd)/logs:/logs" \
  langfuse-logs-mcp

Why no -t? Allocating a pseudo-TTY can interfere with MCP stdio clients. Use -i only so the server communicates over plain stdin/stdout.

The Dockerfile copies the local source tree and installs it with pip install ., so the container always runs your latest commits - a must while testing features that have not shipped on PyPI.

Configuration with MCP clients

Configure for Cursor

Create a .cursor/mcp.json file in your project root:

{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["langfuse-mcp", "--public-key", "YOUR_KEY", "--secret-key", "YOUR_SECRET", "--host", "https://cloud.langfuse.com"]
    }
  }
}

Configure for Claude Desktop

Add to your Claude Desktop configuration file (claude_desktop_config.json):

{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["langfuse-mcp"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "YOUR_KEY",
        "LANGFUSE_SECRET_KEY": "YOUR_SECRET",
        "LANGFUSE_HOST": "https://cloud.langfuse.com"
      }
    }
  }
}

Output Modes

Each tool supports different output modes to control the level of detail in responses:

  • compact (default): Returns a summary with large values truncated
  • full_json_string: Returns the complete data as a JSON string
  • full_json_file: Saves the complete data to a file and returns a summary with file information
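
A rough sketch of what a handler for these three modes could look like (`render`, the parameter names, and the truncation threshold are all illustrative, not the server's actual code):

```python
import json
import tempfile

def render(data, output_mode="compact", max_len=60):
    """Shape tool output per mode: compact summary, full JSON string, or JSON file."""
    def truncate(value):
        if isinstance(value, str) and len(value) > max_len:
            return value[:max_len] + "...[truncated]"
        return value

    if output_mode == "full_json_string":
        return {"data": json.dumps(data)}
    if output_mode == "full_json_file":
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
            json.dump(data, f)
        return {"file": f.name, "summary": {k: truncate(v) for k, v in data.items()}}
    # compact (default): summary with large values truncated
    return {"summary": {k: truncate(v) for k, v in data.items()}}
```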

Development

Clone the repository

git clone https://github.com/avivsinai/langfuse-mcp.git
cd langfuse-mcp

Create a virtual environment and install dependencies

uv venv --python 3.11 .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install --python .venv/bin/python -e ".[dev]"

Set up environment variables

export LANGFUSE_SECRET_KEY="your-secret-key"
export LANGFUSE_PUBLIC_KEY="your-public-key"
export LANGFUSE_HOST="https://cloud.langfuse.com"  # Or your self-hosted URL

Testing

Run the unit test suite (mirrors CI):

pytest

To run the demo client:

uv run examples/langfuse_client_demo.py --public-key YOUR_PUBLIC_KEY --secret-key YOUR_SECRET_KEY

Version Management

This project uses dynamic versioning based on Git tags:

  1. The version is automatically determined from git tags using uv-dynamic-versioning
  2. To create a new release:
    • Tag your commit with git tag v0.1.2 (following semantic versioning)
    • Push the tag with git push --tags
    • Create a GitHub release from the tag
  3. The GitHub workflow will automatically build and publish the package with the correct version to PyPI

For a detailed history of changes, please see the CHANGELOG.md file.

Update Notes

  • Prompt management - get, list, create, and update prompts directly from your AI agent
  • SDK floor - langfuse>=3.11.2 (capped at <4.0.0)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Cache Management

We use the cachetools library to implement efficient caching with proper size limits:

  • Backed by cachetools.LRUCache for well-tested, predictable eviction behavior
  • Cache size is configurable via the CACHE_SIZE constant
  • The least recently used entries are evicted automatically when a cache exceeds its size limit
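
The eviction semantics cachetools.LRUCache provides can be illustrated with a minimal stdlib stand-in (`MiniLRUCache` is a toy for illustration; the server uses cachetools directly):

```python
from collections import OrderedDict

class MiniLRUCache:
    """Toy stand-in mirroring cachetools.LRUCache eviction semantics."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __getitem__(self, key):
        self._data.move_to_end(key)  # a read marks the entry most recently used
        return self._data[key]

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used entry

    def keys(self):
        return list(self._data)

cache = MiniLRUCache(maxsize=3)
cache["a"] = 1
cache["b"] = 2
cache["c"] = 3
_ = cache["a"]   # touching "a" makes it most recently used
cache["d"] = 4   # evicts "b", the least recently used entry
print(sorted(cache.keys()))  # ['a', 'c', 'd']
```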
