
Project description

Fastchat MCP


Python chat client, based on "mcp[cli]", for connecting to MCP servers through multiple protocols, specifically designed to work with integrated language models.


Overview

This package provides a Python interface for connecting to MCP servers in an easy, intuitive, and configurable way. It features a modular architecture that allows new transport protocols and language model (LLM) providers to be added seamlessly. Currently, it supports the HTTPStream and stdio transport protocols and any OpenAI language model, with the capability to expand to more options in the future.

Installation

To install the MCP client, you can use pip:

pip install fastchat-mcp

LLM Implementation

LLM Providers

The client currently supports the following language models:

Provider | Status      | Technical Description
OpenAI   | Implemented | A leading provider of artificial intelligence-based language models, developing advanced technologies for automatic text processing and generation through models like GPT.

🚨 CONFIGURATION NOTE: Currently, this project only works with the OpenAI LLM provider.

Default Provider: OpenAI.

LLM Models

This project can use any valid OpenAI language model, providing flexibility to choose the model that best fits your specific needs. To explore all available models, their features, and how to use them, it is recommended to consult the official OpenAI documentation.

To select a model, you should create a chat instance like this:

from fastchat import Fastchat
chat = Fastchat(model="my-openai-model-name", ...)

Default Model ("gpt-5-nano"): GPT-5 Nano is the smallest and fastest version of the GPT-5 family, designed to deliver quick and accurate responses with ultra-low latency. It is optimized for simple tasks and processing large volumes of queries. Its focus is on speed and low cost, making it ideal for personal assistants, rapid translation, and lightweight applications, while maintaining basic reasoning capabilities and reliable text generation.
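For reference, a minimal sketch of both options (the custom model name below is illustrative; any valid OpenAI model identifier should work):

from fastchat import Fastchat

# Uses the default model ("gpt-5-nano")
chat_default = Fastchat()

# Explicitly selects another OpenAI model (illustrative name)
chat_custom = Fastchat(model="gpt-4.1-mini")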

Transport Protocols

Protocols for communication with MCP servers:

Protocol                 | Status          | Technical Characteristics
stdio                    | Implemented     | Standard input/output interface that facilitates direct communication between processes.
HTTPStream               | Implemented     | Asynchronous HTTP-based protocol that enables continuous data streaming.
SSE (Server-Sent Events) | Not Implemented | Unidirectional protocol that allows the server to send multiple updated events through a single HTTP connection.

🚨 CRITICAL CONFIGURATION NOTE: Currently, this project does not work with the SSE (Server-Sent Events) protocol.

System Requirements

Environment Configuration

  • .env file: The .env file contains the authentication credentials necessary for integration with external services. It must be created in the project root directory with the following format (a helper sketch for generating this file follows this list):

    # .env
    
    # CRIPTOGRAFY_KEY is used for encrypted OAuth2 token data storage
    CRIPTOGRAFY_KEY=<any-criptografy-key>
    
    # OpenAI Authentication
    OPENAI_API_KEY=<your-openai-key>
    
  • fastchat.config.json file: The fastchat.config.json file defines the configuration of available MCP servers. It must be created in the project root directory with the structure described in the File fastchat.config.json section below.
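A minimal sketch of a helper that generates the .env file (hypothetical; the exact key format that mcp-oauth expects for CRIPTOGRAFY_KEY is not documented here, so a random URL-safe token is an assumption):

# generate_env.py -- hypothetical helper script
import secrets
from pathlib import Path

env_contents = (
    f"CRIPTOGRAFY_KEY={secrets.token_urlsafe(32)}\n"  # assumed key format
    "OPENAI_API_KEY=<your-openai-key>\n"
)
Path(".env").write_text(env_contents)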

Dependencies

  • Python = ">=3.11"
  • openai = "^1.68.2"
  • mcp[cli]
  • mcp-oauth

File fastchat.config.json

This file defines the configuration of the MCP (Model Context Protocol) servers available to the project. It must be placed in the root directory of the repository. Its main purpose is to tell the application which servers can be used and how to connect to them.

General Structure

The file is JSON formatted and follows this main structure:

{
    "app_name": "fastchat-mcp",
    "mcp_servers": {
        "...": "..."
    }
}
  • app_name: The identifiable name of the application or project using these MCP servers.
  • mcp_servers: An object listing one or more configured MCP servers, each with its unique key.

Server Definition

Each MCP server inside "mcp_servers" has a custom configuration with these common properties:

  • Server key (e.g., "example_public_server", "github", etc.): internal name identifying this server.

  • protocol: Protocol or communication method. It can be:

    • "httpstream": Communication via HTTP streaming.
    • "stdio": Communication based on standard input/output (local command execution).

Server Configuration Examples

1. Public HTTP Stream Server

"example_public_server": {
    "protocol": "httpstream",
    "httpstream-url": "http://127.0.0.1:8000/public-example-server/mcp",
    "name": "example-public-server",
    "description": "Example public server."
}
  • httpstream-url: Base URL where the MCP HTTP streaming server is exposed.
  • No authentication required (public access).
  • "name" and "description" provide descriptive labels for users.

2. Private HTTP Stream Server with Authentication

"example_private_mcp": {
    "protocol": "httpstream",
    "httpstream-url": "http://127.0.0.1:8000/private-example-server/mcp",
    "name": "example-private-server",
    "description": "Example private server with oauth required.",
    "auth": {
        "required": true,
        "post_body": {
            "username": "user",
            "password": "password"
        }
    }
}
  • Adds an "auth" object on top of the basic configuration:
    • required: true indicates authentication is needed.
    • post_body: Data sent for authentication (username and password here).
  • Suitable for servers secured with OAuth2.

3. GitHub Server with Authentication Headers

"github": {
    "protocol": "httpstream",
    "httpstream-url": "https://api.githubcopilot.com/mcp",
    "name": "github",
    "description": "This server specializes in github operations.",
    "headers": {
        "Authorization": "Bearer {your-github-access-token}"
    }
}
  • Uses a custom HTTP header "Authorization" for token-based authentication.
  • Perfect for sending API keys or tokens in headers to access the server.

4. Local Server Using the stdio Protocol

"my-stdio-server": {
    "protocol": "stdio",
    "name": "my-stdio-server",
    "config": {
        "command": "npx",
        "args": [
            "-y",
            "@modelcontextprotocol/example-stdio-server"
        ]
    }
}
  • Does not use HTTP; communication happens by executing local commands.
  • "config" specifies the command and arguments to run the MCP server. This key value(or body) has the same Claude Desktop sintaxis.
  • Useful for local integrations or development testing without networking.
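Since the "config" body follows the Claude Desktop syntax, a locally developed Python server could be wired up the same way (my_server.py is a hypothetical script path):

"my-python-stdio-server": {
    "protocol": "stdio",
    "name": "my-python-stdio-server",
    "config": {
        "command": "python",
        "args": ["path/to/my_server.py"]
    }
}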

Notes

See config.example.json.

⚠️ Place this file in the project root so the application can detect it automatically.

💡 If you need an httpstream MCP server to test the code, you can use simple-mcp-server.

✍️ If you need help configuring a specific server or using this configuration in your code, feel free to open a discussion!


Additional Configuration

System Prompts

As an advanced configuration, system prompts can be supplied to modify the behavior of responses. Prompts are provided as lists of strings, and multiple system prompts can be supplied.

Args

  • extra_reponse_system_prompts: List of string prompts used as additional system prompts in the final responses.
  • extra_selection_system_prompts: List of string prompts used as additional system prompts for the resource/service selection step exposed by connected MCP servers.

Example:

chat = Fastchat(
    extra_reponse_system_prompts=[
        "You are an NPC street vendor for an RPG game. You must behave as such and respond according to your character. You specialize in selling medieval weaponry, such as swords, armor, shields, and more. Address anyone who speaks to you as if they were an adventurer in a medieval fantasy world."
    ]
)

See example here
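A corresponding sketch for the selection step (the prompt text is illustrative and assumes the same list-of-strings format):

chat = Fastchat(
    extra_selection_system_prompts=[
        "Prefer tools from the 'github' server whenever the query mentions repositories, issues, or pull requests."
    ]
)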

Additional MCP Servers

In addition to the servers defined in the configuration file, you can pass extra MCP servers via parameters. These are provided as a dictionary with the same structure as the "mcp_servers" object in the configuration file.

Args

  • additional_servers: Additional servers to be supplied to the Fastchat component, following the same format as the configuration file, for example:
my_servers = {
  "github": {
    "protocol": "httpstream",
    "httpstream-url": "https://api.githubcopilot.com/mcp",
    "name": "github",
    "description": "This server specializes in github operations.",
    "headers": {
      "Authorization": "Bearer {your-github-token}"
    }
  },
  "other_server": {"...": "..."}
}
chat = Fastchat(additional_servers=my_servers)

Note: Servers defined in the fastchat.config.json file are merged with those passed as parameters; both methods of adding MCP servers can be used together.

API: The websocket exposed by the API supports additional servers passed through the additional_servers parameter.

Usage Example

#example1.py
from fastchat import TerminalChat
chat = TerminalChat()
chat.open()

https://github.com/user-attachments/assets/1fcb0db8-5798-4745-8711-4b93198e36cc

#example2.py
from fastchat import Fastchat
import asyncio

async def chatting():
    chat: Fastchat = Fastchat()
    await chat.initialize()
    while True:
        query = input("> ")
        if query == "":
            break
        # Each query streams back intermediate response steps
        async for step in chat(query):
            print(f"<< {step.json}")

asyncio.run(chatting())

See more usage examples.

Version History

Latest Version Features

  • 💬 Fully functional streaming chat by passing a query; see Fastchat.

  • ⚙️ Integration with Tools, Resources, and Prompts from MCP servers, achieving a well-integrated client workflow with each of these services. Check flow

  • 🔐 Simple authentication system using mcp-oauth and the environment configuration described above. It also integrates header-based authorization.

  • 👾 OpenAI GPT as an integrated LLM using any valid OpenAI language model.

  • 📡 Support for the httpstream transport protocol.

  • 📟 Support for the stdio transport protocol.

  • 💻 Easy console usage via TerminalChat().open(); see example1 for the use case.

  • 💡 Response management and MCP service selection control through system prompts that can be passed to the chat. See example.

See more in changelog

Project Status

⚠️ Important Notice: This project is currently under active development. As a result, errors or unexpected behaviors may occur during usage.

Future versions are expected to include additional features such as voice systems, quick integrations with databases, and built-in websocket support for frontend connections, among other useful functionalities. We invite you to follow (watch) this repository to stay updated on the latest news and improvements.

License

MIT License. See license


If you find this project helpful, please don’t forget to ⭐ star the repository

Download files

Download the file for your platform.

Source Distribution

fastchat_mcp-1.1.1.tar.gz (32.8 kB)

Built Distribution

fastchat_mcp-1.1.1-py3-none-any.whl (36.3 kB)

File details

Details for the file fastchat_mcp-1.1.1.tar.gz.

File metadata

  • Download URL: fastchat_mcp-1.1.1.tar.gz
  • Size: 32.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for fastchat_mcp-1.1.1.tar.gz
Algorithm   | Hash digest
SHA256      | ea5a06e9b254fcdabcbc39901b4cb3a0f097407a7fae4b64fac1b9f7c425f168
MD5         | 986ed5cf6599072c5d40054c0351452e
BLAKE2b-256 | 43ab516c479d3ecd8ce936fc35e376be923cb6c09f661f74f0ea73905b3b9cbd


File details

Details for the file fastchat_mcp-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: fastchat_mcp-1.1.1-py3-none-any.whl
  • Size: 36.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for fastchat_mcp-1.1.1-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | 849a221288e1da52984b2b5c4b198f56d3004c5db57f06c3ec5d96e9faec5bc5
MD5         | 73eb4537423d6b960665fb0a89249f9e
BLAKE2b-256 | 3d613ed5f4a893ce6988428c18958577ad69e1b48d027f73f03e508c5f4de818

