Project description

fastchat-mcp



Python client, based on "mcp[cli]", for connecting to MCP servers through multiple protocols, specifically designed to work with integrated language models.

Overview

This package provides a Python interface for connecting to MCP servers in an easy, intuitive, and configurable way. Its modular architecture makes it straightforward to add new transport protocols and language models. It currently includes support for the HTTPStream protocol and OpenAI's gpt-4o-mini model, with room to add more options in the future.

Installation

To install the MCP client, you can use pip:

pip install fastchat-mcp

LLM Implementation

LLM Providers

The client currently supports the following language models:

  • OpenAI (Implemented): a leading provider of AI-based language models that develops advanced technologies for automatic text processing and generation through models such as GPT.

🚨 CONFIGURATION NOTE Currently, this project only works with the OpenAI LLM provider.

Default Provider: OpenAI.

LLM Models

This project can use any valid OpenAI language model, giving you the flexibility to choose the one that best fits your needs. To explore the available models and their features, consult the official OpenAI documentation.

To select a model, you should create a chat instance like this:

from fastchat import Chat
chat = Chat(model="my-openai-model-name", ...)

Default Model ("gpt-4o-mini"): gpt-4o-mini is a smaller, cost-efficient variant of GPT-4o that balances capability with computational cost, making it a sensible default for resource-conscious deployments.

Transport Protocols

Protocols for communication with MCP servers:

  • stdio (Implemented): standard input/output interface that enables direct communication between processes.
  • HTTPStream (Implemented): asynchronous HTTP-based protocol that enables continuous data streaming.
  • SSE, Server-Sent Events (Not Implemented): unidirectional protocol that lets the server push multiple events over a single HTTP connection.

🚨 CRITICAL CONFIGURATION NOTE Currently, this project does not work with the SSE (Server-Sent Events) protocol.

System Requirements

Environmental Configuration

  • .env file: The .env file contains the authentication credentials necessary for integration with external services. This file must be created in the project root directory with the following format:

    # .env
    
    # CRIPTOGRAFY_KEY is used to encrypt stored token data (OAuth2)
    CRIPTOGRAFY_KEY=<any-criptografy-key>
    
    # OpenAI Authentication
    OPENAI_API_KEY=<your-openai-key>
    
  • fastchat.config.json file: The fastchat.config.json file defines the configuration of available MCP servers. It must be created in the project root directory with the structure described in the section below.
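Before starting a chat, it can help to verify that the required credentials are actually present. The sketch below is illustrative and not part of fastchat-mcp; `check_env` is a hypothetical helper, and the variable names are copied from the `.env` example above.

```python
import os

# Required by the .env file described above; names copied from the example.
REQUIRED_VARS = ("CRIPTOGRAFY_KEY", "OPENAI_API_KEY")

def check_env(environ=os.environ):
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

# With only one of the two variables set, the other is reported as missing.
print(check_env({"OPENAI_API_KEY": "sk-example"}))  # ['CRIPTOGRAFY_KEY']
```

Running such a check at startup gives a clearer error than a failed API call later on.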

Dependencies

  • Python = ">=3.11"
  • openai = "^1.68.2"
  • mcp[cli]
  • mcp-oauth

File fastchat.config.json

This file defines the configuration of available MCP servers (Model Context Protocol) in the project. It must be placed in the root directory of the repository. Its main purpose is to inform the application which servers can be used and how to connect to them.

General Structure

The file is JSON formatted and follows this main structure:

{
    "app_name": "fastchat-mcp",
    "mcp_servers": {
    "..."
    }
}
  • app_name: The identifiable name of the application or project using these MCP servers.
  • mcp_servers: An object listing one or more configured MCP servers, each with its unique key.
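As a quick sanity check, the top-level structure above can be validated with the standard library alone. This is a minimal sketch, assuming only the two required keys described above; `validate_config` is an illustrative helper, not part of the package.

```python
import json

REQUIRED_KEYS = ("app_name", "mcp_servers")

def validate_config(config):
    """Raise KeyError if a required top-level key is absent."""
    for key in REQUIRED_KEYS:
        if key not in config:
            raise KeyError(f"fastchat.config.json is missing {key!r}")
    return config

raw = '{"app_name": "fastchat-mcp", "mcp_servers": {}}'
config = validate_config(json.loads(raw))
print(sorted(config))  # ['app_name', 'mcp_servers']
```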

Server Definition

Each MCP server inside "mcp_servers" has a custom configuration with these common properties:

  • Server key (e.g., "example_public_server", "github", etc.): internal name identifying this server.

  • protocol: Protocol or communication method. It can be:

    • "httpstream": Communication via HTTP streaming.
    • "stdio": Communication based on standard input/output (local command execution).

Server Configuration Examples

1. Public HTTP Stream Server

"example_public_server": {
    "protocol": "httpstream",
    "httpstream-url": "http://127.0.0.1:8000/public-example-server/mcp",
    "name": "example-public-server",
    "description": "Example public server."
}
  • httpstream-url: Base URL where the MCP HTTP streaming server is exposed.
  • No authentication required (public access).
  • "name" and "description" provide descriptive labels for users.
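Entries like the one above can also be assembled programmatically before writing the file to disk. The `add_server` helper below is hypothetical, shown only to illustrate how a server entry slots into the overall structure.

```python
def add_server(config, key, entry):
    """Insert a server entry under mcp_servers (illustrative helper)."""
    config.setdefault("mcp_servers", {})[key] = entry
    return config

config = {"app_name": "fastchat-mcp"}
add_server(config, "example_public_server", {
    "protocol": "httpstream",
    "httpstream-url": "http://127.0.0.1:8000/public-example-server/mcp",
    "name": "example-public-server",
    "description": "Example public server.",
})
print("example_public_server" in config["mcp_servers"])  # True
```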

2. Private HTTP Stream Server with Authentication

"example_private_mcp": {
    "protocol": "httpstream",
    "httpstream-url": "http://127.0.0.1:8000/private-example-server/mcp",
    "name": "example-private-server",
    "description": "Example private server with oauth required.",
    "auth": {
        "required": true,
        "post_body": {
            "username": "user",
            "password": "password"
        }
    }
}
  • Adds an "auth" object on top of basic config:
    • required: true indicates authentication is needed.
    • post_body: Data sent for authentication (username and password here).
  • Suitable for servers secured with OAuth2.
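A client can inspect the "auth" object to decide whether an authentication step is needed before connecting. The `needs_auth` function below is an illustrative helper, not part of fastchat-mcp's API.

```python
def needs_auth(server):
    """Return True when the server entry declares auth.required = true."""
    return bool(server.get("auth", {}).get("required"))

private = {
    "protocol": "httpstream",
    "auth": {"required": True, "post_body": {"username": "user", "password": "password"}},
}
public = {"protocol": "httpstream"}
print(needs_auth(private), needs_auth(public))  # True False
```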

3. GitHub Server with Authentication Headers

"github": {
    "protocol": "httpstream",
    "httpstream-url": "https://api.githubcopilot.com/mcp",
    "name": "github",
    "description": "This server specializes in github operations.",
    "headers": {
        "Authorization": "Bearer {your-github-access-token}"
    }
}
  • Uses a custom HTTP header "Authorization" for token-based authentication.
  • Perfect for sending API keys or tokens in headers to access the server.
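Rather than committing a token into fastchat.config.json, the entry can be built at runtime from an environment variable. This is a sketch under assumptions: `github_server_entry` is a hypothetical helper, and `GITHUB_TOKEN` is an assumed variable name, not one the package requires.

```python
import os

def github_server_entry(token):
    """Build a 'github'-style mcp_servers entry with a bearer-token header."""
    return {
        "protocol": "httpstream",
        "httpstream-url": "https://api.githubcopilot.com/mcp",
        "name": "github",
        "description": "This server specializes in github operations.",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# Read the token from the environment instead of hard-coding it in the file.
entry = github_server_entry(os.environ.get("GITHUB_TOKEN", "dummy-token"))
print(entry["headers"]["Authorization"].startswith("Bearer "))  # True
```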

4. Local Server using STDIO protocol

"my-stdio-server": {
    "protocol": "stdio",
    "name": "my-stdio-server",
    "config": {
        "command": "npx",
        "args": [
            "-y",
            "@modelcontextprotocol/example-stdio-server"
        ]
    }
}
  • Does not use HTTP; communication happens by executing local commands.
  • "config" specifies the command and arguments used to run the MCP server. This value (the body) uses the same syntax as Claude Desktop's server configuration.
  • Useful for local integrations or development testing without networking.
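Conceptually, a stdio transport spawns the configured command and talks to it over its standard streams. The sketch below only shows how the argv list would be assembled from the entry above; `stdio_command` is an illustrative helper, not part of the package.

```python
def stdio_command(server):
    """Assemble the argv list a stdio transport would spawn for this entry."""
    cfg = server["config"]
    return [cfg["command"], *cfg.get("args", [])]

server = {
    "protocol": "stdio",
    "name": "my-stdio-server",
    "config": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/example-stdio-server"],
    },
}
print(stdio_command(server))  # ['npx', '-y', '@modelcontextprotocol/example-stdio-server']
```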

Notes

See config.example.json.

⚠️ Place this file in the project root so the application can detect it automatically.

💡 If you need an httpstream MCP server to test the code, you can use simple-mcp-server.

✍️ If you need help configuring a specific server or using this configuration in your code, feel free to open a discussion!


Usage Example

#example1.py
from fastchat import open_local_chat
open_local_chat()

https://github.com/user-attachments/assets/1fcb0db8-5798-4745-8711-4b93198e36cc

#example2.py
from fastchat import Chat
chat: Chat = Chat()
while True:
    query = input("> ")
    if query == "":
        break
    for step in chat(query):
        print(f"<< {step.json}")

Alternatively, you may test this service using the following template available on GitHub:

# clone repo
git clone https://github.com/rb58853/template-fastchat-mcp.git
# change to project dir
cd template-fastchat-mcp
# install dependencies
pip install -r requirements.txt
# open in vscode
code .

Version History

Latest Version Features

  • 💬 Fully functional streaming chat by passing a query; see Chat.
  • ⚙️ Integration with Tools, Resources, and Prompts from MCP servers, achieving a well-integrated client workflow with each of these services. Check flow
  • 🔐 Simple authentication system using mcp-oauth and the environmental configuration described above; header-based authorization is also supported.
  • 👾 OpenAI GPT as an integrated LLM using any valid OpenAI language model.
  • 📡 Support for the httpstream protocol.
  • 📟 Support for the stdio protocol.
  • 💻 Easy console usage via open_local_chat(); see example1 for the use case.

See more in changelog

Project Status

⚠️ Important Notice: This project is currently in an active development phase. As a result, errors or unexpected behaviors may occur during usage.

License

MIT License. See license


If you find this project helpful, please don’t forget to ⭐ star the repository

Download files


Source Distribution

fastchat_mcp-1.0.0.tar.gz (26.9 kB)

Uploaded Source

Built Distribution


fastchat_mcp-1.0.0-py3-none-any.whl (30.0 kB)

Uploaded Python 3

File details

Details for the file fastchat_mcp-1.0.0.tar.gz.

File metadata

  • Download URL: fastchat_mcp-1.0.0.tar.gz
  • Upload date:
  • Size: 26.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for fastchat_mcp-1.0.0.tar.gz
Algorithm Hash digest
SHA256 dc2bca5c426d3be0ce69e166673a84a188ea3db59f1d0bbacf1370e549b27870
MD5 facab37361f5d16dd5481f736d687227
BLAKE2b-256 bd40fe6c000ce49794b6c73614930f8ab0cd6f7ce2f54c05a75a738019515eb5


File details

Details for the file fastchat_mcp-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: fastchat_mcp-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 30.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for fastchat_mcp-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 fd0291f17c2095f2d5150f380a0e69b54232e3adee11c3e42b3d37741da5e757
MD5 58f87543744ceee7253cec59a654e608
BLAKE2b-256 52530bcc8f58d647909e4bd8262c22aa3cf4a29c63fc3b15eb20c09d63b472f2

