
fastchat-mcp


Python client, based on "mcp[cli]", for connecting to MCP servers through multiple protocols, specifically designed to work with integrated language models.


Overview

This package provides a Python interface for connecting to MCP servers in an easy, intuitive, and configurable way. Its modular architecture makes it straightforward to add new transport protocols and language models. It currently supports the HTTPStream transport and the gpt4o-mini model, with room to expand to more options in the future.

Installation

To install the MCP client, you can use pip:

pip install fastchat-mcp
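
After installation, a quick import check confirms the package is available (a minimal sketch; fastchat is the import name used in the Usage Example section below):

    # sanity_check.py: confirm the package imports under the name used
    # in the Usage Example section of this README.
    from fastchat import Chat, open_local_chat

    print("fastchat-mcp imported:", Chat.__name__, open_local_chat.__name__)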

Implemented Models

The client currently supports the following language models:

Model Technical Description
gpt4o-mini A compact, cost-optimized variant of GPT-4o that balances output quality against computational cost, designed to operate in memory-constrained environments while maintaining strong predictive quality.

🚨 CRITICAL CONFIGURATION NOTE: currently, this project only works with the gpt4o-mini LLM model.

Implemented Transport Protocols

Protocols for communication with MCP servers:

Protocol Status Technical Characteristics
HTTPStream Implemented Asynchronous HTTP-based protocol that enables continuous data streaming. Characterized by low memory consumption and real-time processing capability for partial responses.
SSE (Server-Sent Events) Not Implemented Unidirectional protocol that allows the server to push a stream of update events over a single HTTP connection. Designed specifically for applications requiring real-time updates from the server.
stdio Not Implemented Standard input/output interface that facilitates direct communication between processes. Will provide a lightweight alternative for local environments and unit testing.

🚨 CRITICAL CONFIGURATION NOTE: currently, this project only works with the HTTPStream protocol.

Planned Development

Pending Language Models

  • Integration of additional language models
  • Implementation of dynamic model selection system
  • Optimization of model loading and management

Pending Protocols

  • Complete implementation of SSE for better real-time event handling
  • Development of stdio interface for local environments
  • Performance optimization across all protocols

System Requirements

Environment Configuration

  • .env file: contains the authentication credentials required for integration with external services. This file must be created in the project root directory with the following format:

    # .env
    # OpenAI Authentication
    OPENAI_API_KEY=<YOUR OPENAI-API-KEY>
    
  • config.json file: defines the configuration of available MCP servers. It must be created in the project root directory with the following structure (a sanity-check sketch for both files appears below):

    {
        "app_name": "fastchat-mcp",
        "mcp_servers": {
            "example_public_server": {
                "transport": "httpstream",
                "httpstream-url": "http://127.0.0.1:8000/public-example-server/mcp",
                "name": "example-public-server",
                "description": "Example public server."
            },
            "example_private_mcp": {
                "transport": "httpstream",
                "httpstream-url": "http://127.0.0.1:8000/private-example-server/mcp",
                "name": "example-private-server",
                "description": "Example private server with oauth required.",
                "auth": {
                    "required": true,
                    "post_body": {
                        "username": "user",
                        "password": "password"
                    }
                }
            },
            "github": {
                "transport": "httpstream",
                "httpstream-url": "https://api.githubcopilot.com/mcp",
                "name": "github",
                "description": "This server specializes in github operations.",
                "auth": {
                    "required": false,
                    "post_body": null
                },
                "headers": {
                    "Authorization": "Bearer {access_token}"
                }
            }
        }
    }
    

    If you need an MCP server to test the code, you can use simple-mcp-server.
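
Before starting a chat, it can help to verify that both configuration files are in place. The sketch below is a local sanity check only, not part of the fastchat-mcp API; it assumes python-dotenv is installed and that config.json follows the structure shown above.

    # check_setup.py: sanity-check .env and config.json before the first run.
    # This is a verification sketch, not part of fastchat-mcp itself; it
    # assumes python-dotenv is installed (pip install python-dotenv).
    import json
    import os

    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current working directory
    assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing from .env"

    with open("config.json") as file:
        config = json.load(file)

    for key, server in config["mcp_servers"].items():
        # Only the httpstream transport is currently supported (see the note above).
        assert server.get("transport") == "httpstream", f"{key}: unsupported transport"
        assert "httpstream-url" in server, f"{key}: missing httpstream-url"
        print(f"{key}: {server['httpstream-url']}")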

Dependencies

  • Python >= 3.11
  • openai ^1.68.2
  • mcp[cli]
  • mcp-oauth

Usage Example

# example1.py: open an interactive chat session in the console
from fastchat import open_local_chat

open_local_chat()

# example2.py: drive the chat loop manually and print each streamed step
from fastchat import Chat

chat: Chat = Chat()
while True:
    query = input("> ")
    if query == "":  # an empty input ends the session
        break
    for step in chat(query):  # calling the Chat instance yields response steps
        print(f"<< {step.json}")
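
For non-interactive use, the same Chat interface from example2 can be driven with a single query. The sketch below assumes only what example2 shows, namely that calling the Chat instance yields step objects exposing a .json payload; the query text is an arbitrary placeholder.

    # one_shot.py: a single, non-interactive query using the Chat interface
    # shown in example2 above.
    from fastchat import Chat

    chat = Chat()
    steps = [step.json for step in chat("What tools do the configured MCP servers expose?")]
    for payload in steps:
        print(f"<< {payload}")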

Alternatively, you may test this service using the following template available on GitHub:

# clone repo
git clone https://github.com/rb58853/template_mcp_llm_client.git

# change to project dir
cd template_mcp_llm_client

# install dependencies
pip install -r requirements.txt

# open in vscode
code .

Version History

Latest Version Features

  • 💬 Fully functional streaming chat by passing a query; see Chat.
  • ⚙️ Integration with Tools, Resources, and Prompts from MCP servers, achieving a well-integrated client workflow with each of these services.
  • 🔐 Simple authentication system using mcp-oauth and the environment configuration described above; header-based authorization is also supported.
  • 👾 OpenAI GPT as the integrated LLM, using the gpt4o-mini model.
  • 📡 Support for the httpstream transport protocol.
  • 💻 Easy console usage via open_local_chat(); see example1 for the use case.

See more in changelog

Project Status

⚠️ Important Notice: this project is currently in an active development phase. As a result, errors or unexpected behaviors may occur during usage.

License

MIT License. See license.
