MFCS (Model Function Calling Standard)

A Python library for handling function calling in Large Language Models (LLMs).

Features

  • Generate standardized function calling prompt templates
  • Parse function calls from LLM streaming output
  • Validate function schemas
  • Async streaming support with real-time processing
  • Multiple function call handling
  • Memory prompt management
  • Result prompt management

Installation

pip install mfcs

Configuration

  1. Copy .env.example to .env:
cp .env.example .env
  2. Edit .env and set your environment variables:
# OpenAI API Configuration
OPENAI_API_KEY=your-api-key-here
OPENAI_API_BASE=your-api-base-url-here

Example Installation

The examples are located in the examples directory and require additional dependencies:

cd examples
pip install -r requirements.txt

Usage

1. Generate Function Calling Prompt Templates

from mfcs.function_prompt import FunctionPromptGenerator

# Define your function schemas
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "The unit of temperature to use",
                    "default": "celsius"
                }
            },
            "required": ["location"]
        }
    }
]

# Generate prompt template
template = FunctionPromptGenerator.generate_function_prompt(functions)
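The Features list also mentions schema validation. The library ships its own validator; purely as an illustration of what a structural check involves, a minimal hand-rolled version for the schema above could look like this (the `validate_function_schema` helper here is hypothetical, not part of mfcs):

```python
def validate_function_schema(schema):
    """Minimal structural check for a function schema (illustrative only)."""
    for key in ("name", "description", "parameters"):
        if key not in schema:
            raise ValueError(f"missing required field: {key}")
    params = schema["parameters"]
    if params.get("type") != "object":
        raise ValueError("parameters.type must be 'object'")
    properties = params.get("properties", {})
    # Every name listed in "required" must be a defined property
    for required in params.get("required", []):
        if required not in properties:
            raise ValueError(f"required property not defined: {required}")
    return True
```

A schema that lists a required property without defining it would fail this check immediately, which is typically cheaper to catch at startup than at call time.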

2. Parse Function Calls from Output

from mfcs.response_parser import ResponseParser

# Example function call
output = """
I need to check the weather.

<mfcs_call>
<instructions>Getting weather information for New York</instructions>
<call_id>weather_1</call_id>
<name>get_weather</name>
<parameters>
{
  "location": "New York, NY",
  "unit": "fahrenheit"
}
</parameters>
</mfcs_call>
"""

# Parse the function call
parser = ResponseParser()
content, tool_calls, memory_calls = parser.parse_output(output)
print(f"Content: {content}")
print(f"Function calls: {tool_calls}")
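For reference, the `<mfcs_call>` tag format above is simple enough to extract with the standard library. The following is a sketch of the wire format only, not the library's actual parser:

```python
import json
import re

# Matches one <mfcs_call> block and captures its four fields
MFCS_CALL_RE = re.compile(
    r"<mfcs_call>\s*"
    r"<instructions>(?P<instructions>.*?)</instructions>\s*"
    r"<call_id>(?P<call_id>.*?)</call_id>\s*"
    r"<name>(?P<name>.*?)</name>\s*"
    r"<parameters>\s*(?P<parameters>.*?)\s*</parameters>\s*"
    r"</mfcs_call>",
    re.DOTALL,
)

def extract_calls(text):
    """Return (plain_content, calls): text with call blocks removed, plus parsed calls."""
    calls = []
    for m in MFCS_CALL_RE.finditer(text):
        calls.append({
            "instructions": m.group("instructions").strip(),
            "call_id": m.group("call_id").strip(),
            "name": m.group("name").strip(),
            "arguments": json.loads(m.group("parameters")),
        })
    content = MFCS_CALL_RE.sub("", text).strip()
    return content, calls
```

Running `extract_calls` on the sample output above yields the surrounding prose as `content` and one parsed call with `name="get_weather"`.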

3. Async Streaming Processing

from mfcs.response_parser import ResponseParser, ToolCall
from mfcs.result_manager import ResultManager
import json

async def process_stream(stream):
    # `stream` is an async LLM streaming response, e.g. a chat completion
    # created with stream=True
    parser = ResponseParser()
    result_manager = ResultManager()

    async for delta, call_info, reasoning_content, usage in parser.parse_stream_output(stream):
        # Print reasoning content if present
        if reasoning_content:
            print(f"Reasoning: {reasoning_content}")
            
        # Print parsed content
        if delta:
            print(f"Content: {delta.content} (finish reason: {delta.finish_reason})")
            
        # Handle tool calls
        if call_info and isinstance(call_info, ToolCall):
            print("\nTool Call:")
            print(f"Instructions: {call_info.instructions}")
            print(f"Call ID: {call_info.call_id}")
            print(f"Name: {call_info.name}")
            print(f"Arguments: {json.dumps(call_info.arguments, indent=2)}")
            
            # Simulate tool execution (in real application, this would call actual tools)
            # Add API result with call_id (now required)
            result_manager.add_tool_result(
                name=call_info.name,
                result={"status": "success", "data": f"Simulated data for {call_info.name}"},
                call_id=call_info.call_id
            )
            
        # Print usage statistics if available
        if usage:
            print(f"Usage: {usage}")
    
    print("\nTool Results:")
    print(result_manager.get_tool_results())
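process_stream above expects a live LLM stream. To experiment with the async iteration pattern without network access, you can drive an async consumer with a simulated stream; the chunks below are plain strings, standing in for the library's delta objects:

```python
import asyncio

async def fake_stream(chunks):
    # Stand-in for an LLM streaming response: yields chunks one at a time
    for chunk in chunks:
        await asyncio.sleep(0)  # yield control, as a real network stream would
        yield chunk

async def consume(stream):
    # Same `async for` shape as process_stream, minus the parsing
    parts = []
    async for chunk in stream:
        parts.append(chunk)
    return "".join(parts)

text = asyncio.run(consume(fake_stream(["Hello", ", ", "world"])))
```

Swapping `fake_stream` for a real streaming response is the only change needed to move from this harness to production code.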

4. Memory Prompt Management

from mfcs.memory_prompt import MemoryPromptGenerator

# Define memory APIs
memory_apis = [
    {
        "name": "store_preference",
        "description": "Store user preferences and settings",
        "parameters": {
            "type": "object",
            "properties": {
                "preference_type": {
                    "type": "string",
                    "description": "Type of preference to store"
                },
                "value": {
                    "type": "string",
                    "description": "Value of the preference"
                }
            },
            "required": ["preference_type", "value"]
        }
    }
]

# Generate memory prompt template
template = MemoryPromptGenerator.generate_memory_prompt(memory_apis)

5. Result Management System

The Result Management System provides a unified way to handle and format results from both tool calls and memory operations.

from mfcs.result_manager import ResultManager

# Initialize result manager
result_manager = ResultManager()

# Store tool call results
result_manager.add_tool_result(
    name="get_weather",           # Tool name
    result={"temperature": 25},   # Tool execution result
    call_id="weather_1"          # Unique identifier for this call
)

# Store memory operation results
result_manager.add_memory_result(
    name="store_preference",      # Memory operation name
    result={"status": "success"}, # Operation result
    memory_id="memory_1"         # Unique identifier for this operation
)

# Get formatted results for LLM consumption
tool_results = result_manager.get_tool_results()
# Output format:
# <tool_result>
# {call_id: weather_1, name: get_weather} {"temperature": 25}
# </tool_result>

memory_results = result_manager.get_memory_results()
# Output format:
# <memory_result>
# {memory_id: memory_1, name: store_preference} {"status": "success"}
# </memory_result>
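The result blocks shown above follow a simple, regular layout. A standalone formatter that produces the same shape can be handy for testing downstream prompt assembly; note this is only an approximation of ResultManager's real output:

```python
import json

def format_tool_result(call_id, name, result):
    # Mirrors the documented <tool_result> layout; ResultManager's actual
    # output may differ in detail
    return (
        "<tool_result>\n"
        f"{{call_id: {call_id}, name: {name}}} {json.dumps(result)}\n"
        "</tool_result>"
    )

def format_memory_result(memory_id, name, result):
    # Same layout with memory_id in place of call_id
    return (
        "<memory_result>\n"
        f"{{memory_id: {memory_id}, name: {name}}} {json.dumps(result)}\n"
        "</memory_result>"
    )
```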

Examples

Check out the examples directory for detailed examples:

  • function_calling_examples.py: Basic function calling examples

    • Function prompt generation
    • Function call parsing
    • Result management
  • async_function_calling_examples.py: Async streaming examples

    • Real-time streaming processing
    • Multiple function call handling
    • Usage statistics tracking
  • memory_function_examples.py: Memory management examples

    • Memory prompt generation
    • Memory operation handling
    • Memory result and context management
  • mcp_client_example.py: Model Context Protocol (MCP) examples

    • MCP client implementation
    • MCP tool management
    • MCP response handling
  • async_mcp_client_example.py: Async MCP examples

    • Async MCP client implementation
    • Async MCP tool management
    • Async MCP response handling
  • async_memory_function_examples.py: Async memory examples

    • Async memory operations
    • Async memory context management
    • Async memory persistence

Run the examples to see the library in action:

# Run basic examples
python examples/function_calling_examples.py

# Run async examples
python examples/async_function_calling_examples.py

# Run MCP examples
python examples/mcp_client_example.py

# Run async MCP examples
python examples/async_mcp_client_example.py

# Run memory examples
python examples/memory_function_examples.py

# Run async memory examples
python examples/async_memory_function_examples.py

Notes

  • The library requires Python 3.8+ for async features
  • Make sure to handle API keys and sensitive information securely
  • For production use, replace simulated API calls with actual implementations
  • Follow the tool calling rules in the prompt template
  • Use unique call_ids for each function call
  • Provide clear instructions for each function call
  • Handle errors and resource cleanup in async streaming processing
  • Use ResultManager to manage results from multiple function calls
  • Handle exceptions and timeouts properly in async context
  • Use MemoryPromptGenerator for managing conversation context

System Requirements

  • Python 3.8 or higher

License

MIT License
