
A robust integration engine that enhances communication between LLMs and MCP servers/functions with validation, retries, and safety.


llm_to_mcp_integration_engine

🔍 What is llm_to_mcp_integration_engine?

llm_to_mcp_integration_engine is a new kind of communication layer between LLMs and MCP servers or functions.

It improves the reliability of tool calling by ensuring tools are selected, validated, and executed correctly before any external process is triggered.

It searches for tool selection indicators (SELECTED_TOOLS, SELECTED_TOOL, NO_TOOLS_SELECTED) in the LLM's response and validates them against a predefined tool list.
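As a rough sketch of that idea (not the package's actual implementation), marker extraction and validation against a registered tool list might look like this; the regex, tool names, and function shape are illustrative assumptions:

```python
import re

# Illustrative only: the marker names come from this README, but the parsing
# logic and registry below are assumptions, not the engine's real internals.
REGISTERED_TOOLS = {"search_web", "read_file"}

def extract_selection(llm_response: str):
    """Return validated tool names, or None if the LLM selected no tools."""
    if "NO_TOOLS_SELECTED" in llm_response:
        return None
    # Accept both "SELECTED_TOOLS: [a, b]" and "SELECTED_TOOL: a" styles.
    match = re.search(r"SELECTED_TOOLS?\s*:\s*\[?([\w,\s]+)\]?", llm_response)
    if not match:
        raise ValueError("no selection marker found in LLM response")
    tools = [t.strip() for t in match.group(1).split(",") if t.strip()]
    unknown = [t for t in tools if t not in REGISTERED_TOOLS]
    if unknown:
        raise ValueError(f"unregistered tools selected: {unknown}")
    return tools

print(extract_selection("I will use SELECTED_TOOLS: [search_web, read_file]"))
```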


🚀 What is new about llm_to_mcp_integration_engine?

The llm_to_mcp_integration_engine distinguishes itself by handling unstructured outputs gracefully and incorporating dynamic parsing and retry mechanisms (RETRY_PROMPT, CHANGE_LLM_IN_RETRY), offering a more flexible and resilient solution for LLM-tool integration.


❓ Why do we need llm_to_mcp_integration_engine?

  • LLMs often misformat or misorder tool calls, leading to failures.
  • Tool execution must be validated before triggering any MCP server or function.
  • This protocol brings clarity, control, and reliability to LLM-tool integrations.

❌ Is there an existing communication layer?

No.
This is a novel invention. We introduced the LLM2MCP protocol, a first-of-its-kind communication framework that connects LLMs to MCP servers or functions in a structured, validated, and controllable way.

What makes it new:

  • Dual Registration: Tools/functions are listed in both the LLM prompt and the engine, ensuring alignment and consistency.
  • Non-JSON Tolerance: Even when the LLM response is not fully JSON, the engine can still extract valid tool selections using regex and logic-based checks.
  • Retry Framework: If validation fails (missing tools, incorrect formats, etc.), the engine can retry with a new prompt or even switch to a different LLM.
  • Fine-Grained Failure Detection: Developers can diagnose exactly where the LLM fails — whether in selecting the right tool, formatting parameters, or transitioning to tool execution.
  • Execution Safety: The engine ensures no tool or MCP server is called unless the response is valid and verified.

This bundling of validation, fallbacks, control logic, and robustness into a single integration engine is what makes it a new invention.
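The "Execution Safety" point above can be sketched in a few lines: no executor runs until both the tool name and its parameters validate against an engine-side registry. The registry layout and the safe_execute helper are assumptions for illustration, not the package's real API:

```python
# Hypothetical engine-side registry: each tool declares its required
# parameters so calls can be verified before anything external runs.
TOOL_REGISTRY = {
    "create_file": {"required_params": {"path", "content"}},
}

def safe_execute(tool_name, params, executor):
    """Run executor(**params) only if the call validates against the registry."""
    spec = TOOL_REGISTRY.get(tool_name)
    if spec is None:
        raise ValueError(f"tool {tool_name!r} is not registered with the engine")
    missing = spec["required_params"] - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    return executor(**params)  # reached only after validation passes
```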


⚙️ How to Use It

📦 Install via pip

pip install llm_to_mcp_integration_engine

✅ Default Usage

from llm_to_mcp_integration_engine import llm_to_mcp_integration_default

llm_to_mcp_integration_default(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True
)

🔧 Advanced Usage

from llm_to_mcp_integration_engine import llm_to_mcp_integration_advance

llm_to_mcp_integration_advance(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True,
    no_tools_selected=True,
    multi_stage_tools_select=True
)

🧠 Custom Usage (e.g., for agentic HTML/CSS tools)

from llm_to_mcp_integration_engine import llm_to_mcp_integration_custom

llm_to_mcp_integration_custom(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True
)

✅ Benefits of Using llm_to_mcp_integration_engine

  • Flexible Response Handling
  • Reliable Tool Execution
  • Reliable Programmatic Validation
  • Improved Tool Chaining
  • Synergy with Reasoning Techniques (e.g., Chain-of-Thought)
  • Handles "No Tools Needed" Scenarios
  • Error Detection and Retry Mechanism
  • Failure Diagnostics & Monitoring
  • Cost Optimization via Tiered LLM Usage
  • Standardization of LLM-to-Tool Interfaces

💡 Also includes dynamic LLM switching on failure for enhanced robustness and cost-efficiency.
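A minimal sketch of that tiered fallback, assuming placeholder call_llm and validate callables (the real engine's retry API is not shown in this README): cheaper models are tried first, each with a bounded number of retry prompts, before escalating to a stronger model.

```python
# Illustrative sketch of tiered retries with LLM switching. Model names,
# call_llm, and validate are placeholders, not the package's API.
def run_with_fallback(prompt, models, call_llm, validate, max_attempts=2):
    """Try each model in order (cheapest first) until a response validates."""
    for model in models:
        current_prompt = prompt
        for _ in range(max_attempts):
            response = call_llm(model, current_prompt)
            if validate(response):
                return model, response
            # Re-ask with a retry prompt before escalating to the next model.
            current_prompt = f"RETRY_PROMPT: {prompt}"
    raise RuntimeError("all models failed validation")
```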


📜 License

You are free to use this engine for personal and research purposes.
However, you are not allowed to modify or distribute it without explicit permission from the author.


