
A tool to help agents read and query long documents.


MCP Long Context Reader

MCP Long Context Reader is a Python-based toolkit designed to overcome the context window limitations and high costs associated with Large Language Models (LLMs) processing extensive documents. It provides a FastMCP server with multiple, powerful strategies for an LLM agent to "read" and query long documents without needing to load the entire text into its context window.

This project features an intelligent, filesystem-based caching backend. When a document is processed for the first time with a specific strategy, the expensive work (like generating embeddings) is cached. Subsequent queries on the same document are significantly faster.
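The caching idea can be pictured with a small sketch. The helper below is purely illustrative (the project's actual cache-key scheme is not documented here): it derives a stable key from the document path, strategy name, and parameters, so repeated calls with identical inputs land on the same cache entry.

```python
import hashlib
import json

def cache_key(document_path: str, strategy: str, params: dict) -> str:
    """Derive a deterministic cache key from the processing inputs."""
    payload = json.dumps(
        {"path": document_path, "strategy": strategy, "params": params},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

key = cache_key("report.txt", "retrieve_with_rag", {"chunk_size": 1000})
```

Because the key is a pure function of the inputs, changing either the strategy or its parameters naturally produces a separate cache entry for the same document.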

Features

This toolkit provides five distinct strategies as MCP tools:

  1. glance: Provides a quick look at the beginning of a file, showing the first few thousand characters and the total line count.
  2. search_with_regex: Finds and extracts text snippets using regular expression patterns. Ideal for precise, pattern-based lookups.
  3. retrieve_with_rag: Uses a Retrieval-Augmented Generation (RAG) pipeline to find the most semantically relevant document chunks based on a natural language question.
  4. summarize_with_map_reduce: A classic "divide and conquer" strategy that summarizes large chunks in parallel and then combines those summaries. Best for getting the gist of a very long document.
  5. summarize_with_sequential_notes: An advanced strategy where an LLM reads the document sequentially, taking query-aware notes. Best for tasks requiring strict order and detail sensitivity (e.g., "needle-in-a-haystack").
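The map-reduce strategy (item 4) can be sketched in a few lines. This toy version stands in for an LLM call with simple truncation (summarize_chunk is a placeholder, not the project's code), but the shape is the same: split the document, summarize the chunks in parallel, then summarize the summaries.

```python
from concurrent.futures import ThreadPoolExecutor

def summarize_chunk(chunk: str) -> str:
    # Placeholder: the real strategy sends each chunk to an LLM.
    return chunk[:40]

def map_reduce_summary(text: str, chunk_size: int = 200) -> str:
    # Map: split the document and summarize each chunk in parallel.
    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(summarize_chunk, chunks))
    # Reduce: combine the partial summaries into a final one.
    return summarize_chunk("\n".join(partials))
```

Because chunk summaries are independent, the map phase parallelizes cleanly; only the final reduce step sees the combined text.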

Getting Started

1. Prerequisites

  • Python 3.10 or newer.
  • uv Python package manager. If you don't have it, install it with pip install uv.
  • An OpenAI API Key for the RAG and LLM-based strategies.

2. Environment Setup

First, clone the repository to your local machine:

git clone <repository-url>
cd mcp-long-context-reader

Next, create and activate a virtual environment using uv:

uv venv
source .venv/bin/activate
# On Windows, use: .venv\Scripts\activate

3. Install Dependencies

Install the project and its required dependencies:

uv pip install .

This command builds and installs the project and its core dependencies into your virtual environment.

4. Configure Environment Variables

This project requires environment variables to be set for configuration and security.

  1. Workspace Directory (Required): You must specify a sandboxed directory from which the server is allowed to read files. This is a critical security measure.

    export MCP_WORKSPACE_DIRECTORY="/path/to/your/documents/dir"
    
  2. Model Provider & API Key (Required)

    Choose one of the following providers and set the corresponding environment variables.

    • OpenAI
    export MCP_API_PROVIDER="openai"
    export MCP_EMBEDDING_MODEL="text-embedding-3-small"
    export MCP_LLM_MODEL="gpt-4o"
    export OPENAI_API_KEY="sk-..."
    
    • DashScope
    export MCP_API_PROVIDER="dashscope"
    export MCP_EMBEDDING_MODEL="text-embedding-v3"
    export MCP_LLM_MODEL="qwen-max"
    export DASHSCOPE_API_KEY="sk-..."
    
  3. Cache Directory (Required): You must specify where to store the cache files.

    export MCP_CACHE_DIRECTORY="/path/to/your/cache"
    
  4. Optional Environment Variables

    • OpenAI API Base URL: If you are using a custom OpenAI API base URL, you can set it here.
    export OPENAI_API_BASE_URL="https://your.api.base.url/v1"
    
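The workspace directory is what makes the server a sandbox: requested paths are confined to it. A check along the following lines (illustrative only, not the server's actual implementation) shows why this is a security boundary — it rejects paths that traverse outside the workspace.

```python
from pathlib import Path

def resolve_in_workspace(workspace: str, relative: str) -> Path:
    """Resolve a requested file and refuse anything outside the workspace."""
    root = Path(workspace).resolve()
    target = (root / relative).resolve()
    if target != root and root not in target.parents:
        raise PermissionError(f"{relative!r} escapes the workspace")
    return target
```

With MCP_WORKSPACE_DIRECTORY set to /data/docs, for example, a request for ../etc/passwd would resolve to /etc/passwd and be refused.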

Usage

Starting the Server

To start the FastMCP server, set the required environment variables and run the server.py module from the project root:

uv run fastmcp run src/mcp_long_context_reader/server.py --transport sse --port 8000

This command starts the MCP server over SSE at http://localhost:8000/sse. For detailed information, see the FastMCP Documentation.

Calling from a Client (Python)

Once the server is running, you can call its tools from a Python client. The following example demonstrates how to use the search_with_regex tool.

First, ensure you have fastmcp installed in your client environment: pip install fastmcp.

import asyncio
from fastmcp import Client

async def main():
    # Connect to the server running on localhost port 8000
    client = Client("http://localhost:8000/sse")

    async with client:
        result = await client.call_tool(
            "search_with_regex",
            {
                # This path should be relative to this Python script
                "context_path": "path/to/context.txt",
                "regex_pattern": " hello ",
            },
        )
        print(result)

if __name__ == "__main__":
    asyncio.run(main())

You can find this and other examples in the examples/ folder.
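Conceptually, search_with_regex behaves like a local regex scan that returns text snippets around each match. The standalone sketch below illustrates that behavior (the context window size and output shape are assumptions for illustration, not the server's actual code):

```python
import re

def snippets_around_matches(text: str, pattern: str, context: int = 20) -> list[str]:
    """Return a snippet of surrounding text for every regex match."""
    results = []
    for m in re.finditer(pattern, text):
        start = max(0, m.start() - context)
        end = min(len(text), m.end() + context)
        results.append(text[start:end])
    return results

print(snippets_around_matches("say hello to the world", r" hello "))
# → ['say hello to the world']
```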

JSON Configuration

The following configuration sets up the MCP server on stdio, which is useful for integrating with Claude Desktop. Remember to replace the placeholder with the absolute path to the server.py file in your cloned repository.

{
  "mcpServers": {
    "mcp-long-context-reader": {
      "command": "python",
      "args": [
        "/path/to/your/cloned/repo/src/mcp_long_context_reader/server.py"
      ],
      "env": {
        "MCP_WORKSPACE_DIRECTORY": "/path/to/your/documents/dir",
        "MCP_CACHE_DIRECTORY": "/path/to/your/cache",
        "MCP_API_PROVIDER": "openai",
        "MCP_EMBEDDING_MODEL": "text-embedding-3-small",
        "MCP_LLM_MODEL": "gpt-4o",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
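Before pointing a client at such a config, it can be worth sanity-checking it with a small validator like the one below. The required-variable list here is an assumption drawn from the setup section above, not an official schema:

```python
import json

# Assumed required variables, per the environment setup section.
REQUIRED_ENV = {"MCP_WORKSPACE_DIRECTORY", "MCP_CACHE_DIRECTORY", "MCP_API_PROVIDER"}

def missing_env_vars(config_text: str) -> dict[str, list[str]]:
    """Report required env vars absent from each configured server."""
    config = json.loads(config_text)
    report = {}
    for name, server in config.get("mcpServers", {}).items():
        missing = sorted(REQUIRED_ENV - set(server.get("env", {})))
        if missing:
            report[name] = missing
    return report
```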

Local Example

We have prepared a simple run-and-test script for you.

# 1. Set the necessary environment variables in examples/run_server_sse.sh
# 2. Run the server:
bash examples/run_server_sse.sh

# 3. In another terminal, run the client:
uv run examples/example_client.py

Development

Development Setup

If you plan to contribute to the project, you'll need to install the full set of development dependencies, which include tools for testing, formatting, and building documentation.

The recommended way is to use uv sync, which installs all packages from the uv.lock file:

uv sync

Alternatively, you can install the package in editable mode with the dev extras defined in pyproject.toml:

uv pip install -e ".[dev]"

Running Tests

The project uses pytest for testing. To run the full test suite, execute the following command:

uv run pytest

Building Documentation

The documentation is generated using Sphinx. To build the HTML documentation locally, navigate to the docs/ directory and use the provided Makefile:

cd docs
make html

After the build is complete, you can view the documentation by opening docs/build/html/index.html in your web browser.

Code Quality and Pre-commit Hooks

This project uses pre-commit to maintain code quality. To set up:

uv run pre-commit install

To run checks manually:

uv run pre-commit run --all-files

All code must pass these checks before committing.
