Simple building blocks for agentic AI systems with MCP client and conversation management
Project description
Agentic Blocks
Building blocks for agentic systems with a focus on simplicity and ease of use.
Overview
Agentic Blocks provides clean, simple components for building AI agent systems, specifically focused on:
- MCP Client: Connect to Model Context Protocol (MCP) endpoints with a sync-by-default API
- Messages: Manage LLM conversation history with OpenAI-compatible format
- LLM: Simple function for calling OpenAI-compatible completion APIs
All components follow principles of simplicity, maintainability, and ease of use.
Installation
pip install -e .
For development:
pip install -e ".[dev]"
Quick Start
MCPClient - Connect to MCP Endpoints
The MCPClient provides a unified interface for connecting to different types of MCP endpoints:
from agentic_blocks import MCPClient
# Connect to an SSE endpoint (sync by default)
client = MCPClient("https://example.com/mcp/server/sse")
# List available tools
tools = client.list_tools()
print(f"Available tools: {len(tools)}")
# Call a tool
result = client.call_tool("search", {"query": "What is MCP?"})
print(result)
Supported endpoint types:
- SSE endpoints: URLs with /sse in the path
- HTTP endpoints: URLs with /mcp in the path
- Local scripts: file paths to Python MCP servers
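For illustration, a minimal sketch of constructing a client for each endpoint type; the URLs and the script path are placeholders, not real servers:
from agentic_blocks import MCPClient

# SSE endpoint (placeholder URL containing /sse)
sse_client = MCPClient("https://example.com/mcp/server/sse")

# Streamable HTTP endpoint (placeholder URL containing /mcp)
http_client = MCPClient("https://example.com/mcp")

# Local Python MCP server script (placeholder path)
local_client = MCPClient("path/to/my_mcp_server.py")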
Async support for advanced users:
# Async versions available
tools = await client.list_tools_async()
result = await client.call_tool_async("search", {"query": "async example"})
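These async variants are plain coroutines, so they can be driven with asyncio. A minimal sketch, assuming the client can be constructed and awaited from a fresh event loop:
import asyncio
from agentic_blocks import MCPClient

async def main():
    client = MCPClient("https://example.com/mcp/server/sse")  # placeholder endpoint
    tools = await client.list_tools_async()
    result = await client.call_tool_async("search", {"query": "async example"})
    print(f"{len(tools)} tools available")
    print(result)

asyncio.run(main())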
Messages - Manage Conversation History
The Messages class helps build and manage LLM conversations in OpenAI-compatible format:
from agentic_blocks import Messages
# Initialize with system prompt
messages = Messages(
    system_prompt="You are a helpful assistant.",
    user_prompt="Hello, how can you help me?",
    add_date_and_time=True
)
# Add assistant response
messages.add_assistant_message("I can help you with various tasks!")
# Add tool calls
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
}
messages.add_tool_call(tool_call)
# Add tool response
messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
# Get messages for LLM API
conversation = messages.get_messages()
# View readable format
print(messages)
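For illustration, get_messages() returns a plain list of OpenAI-style message dicts. For the conversation above it would look roughly like the sketch below; the exact date/time wording and whether the tool call lands on a separate assistant message are assumptions about the library's behavior:
conversation = [
    {"role": "system", "content": "You are a helpful assistant. ..."},  # date/time appended when add_date_and_time=True
    {"role": "user", "content": "Hello, how can you help me?"},
    {"role": "assistant", "content": "I can help you with various tasks!"},
    {"role": "assistant", "content": "", "tool_calls": [
        {"id": "call_123", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}}
    ]},
    {"role": "tool", "tool_call_id": "call_123", "content": "The weather in Paris is sunny, 22°C"},
]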
LLM - Call OpenAI-Compatible APIs
The call_llm function provides a simple interface for calling LLM completion APIs:
from agentic_blocks import call_llm, Messages
# Method 1: Using with Messages object
messages = Messages(
    system_prompt="You are a helpful assistant.",
    user_prompt="What is the capital of France?"
)
response = call_llm(messages, temperature=0.7)
print(response) # "The capital of France is Paris."
# Method 2: Using with raw message list
messages_list = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
response = call_llm(messages_list, model="gpt-4o-mini")
print(response) # "2+2 equals 4."
# Method 3: Using with tools (for function calling)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }
]
messages = Messages(user_prompt="What's the weather like in Stockholm?")
response = call_llm(messages, tools=tools)
print(response)
Environment Setup:
Create a .env file in your project root:
OPENAI_API_KEY=your_api_key_here
Or pass the API key directly:
response = call_llm(messages, api_key="your_api_key_here")
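Any extra keyword arguments are passed through to the OpenAI API, so sampling parameters can be set directly on the call (the values here are just examples):
response = call_llm(
    messages,
    model="gpt-4o-mini",
    temperature=0.2,
    max_tokens=256,
)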
Complete Example - Tool Calling with Weather API
This example demonstrates a complete workflow using function calling with an LLM. For a full interactive notebook version, see notebooks/agentic_example.ipynb.
from agentic_blocks import call_llm, Messages
# Define tools in OpenAI function calling format
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather information for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform a mathematical calculation",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    }
]
# Create conversation with system and user prompts
messages = Messages(
    system_prompt="You are a helpful assistant with access to weather and calculation tools.",
    user_prompt="What is the weather in Stockholm?"
)
# Call LLM with tools - it will decide which tools to call
model = "gpt-4o-mini" # or your preferred model
response = call_llm(model=model, messages=messages, tools=tools)
# Add the LLM's response (including any tool calls) to conversation
messages.add_response_message(response)
# Display the conversation so far
for message in messages.get_messages():
    print(message)
# Check if there are pending tool calls that need execution
print("Has pending tool calls:", messages.has_pending_tool_calls())
# In a real implementation, you would:
# 1. Execute the actual tool calls (get_weather, calculate, etc.)
# 2. Add tool responses using messages.add_tool_response()
# 3. Call the LLM again to get the final user-facing response
Expected Output:
{'role': 'system', 'content': 'You are a helpful assistant with access to weather and calculation tools.'}
{'role': 'user', 'content': 'What is the weather in Stockholm?'}
{'role': 'assistant', 'content': '', 'tool_calls': [{'id': 'call_abc123', 'type': 'function', 'function': {'name': 'get_weather', 'arguments': '{"location": "Stockholm, Sweden", "unit": "celsius"}'}}]}
Has pending tool calls: True
Key Features Demonstrated:
- Messages management: Clean conversation history with system/user prompts
- Tool calling: The LLM automatically decides to call the get_weather function
- Response handling: add_response_message() handles both content and tool calls
- Pending detection: has_pending_tool_calls() identifies when tools need execution
Next Steps: After the LLM makes tool calls, you would implement the actual tool functions and continue the conversation:
import json

# Implement actual weather function
def get_weather(location, unit="celsius"):
    # Your weather API implementation here
    return f"The weather in {location} is sunny, 22°{unit[0].upper()}"

# Execute pending tool calls
if messages.has_pending_tool_calls():
    last_message = messages.get_messages()[-1]
    for tool_call in last_message.get("tool_calls", []):
        if tool_call["function"]["name"] == "get_weather":
            args = json.loads(tool_call["function"]["arguments"])
            result = get_weather(**args)
            messages.add_tool_response(tool_call["id"], result)

# Get final response from LLM
final_response = call_llm(model=model, messages=messages)
messages.add_assistant_message(final_response)
print(f"Final response: {final_response}")
Development Principles
This project follows these core principles:
- Simplicity First: Keep code simple, readable, and focused on core functionality
- Sync-by-Default: Primary methods are synchronous for ease of use, with optional async versions
- Minimal Dependencies: Avoid over-engineering and complex error handling unless necessary
- Clean APIs: Prefer straightforward method names and clear parameter expectations
- Maintainable Code: Favor fewer lines of clear code over comprehensive edge case handling
API Reference
MCPClient
MCPClient(endpoint: str, timeout: int = 30)
Methods:
- list_tools() -> List[Dict]: Get available tools (sync)
- call_tool(name: str, args: Dict) -> Dict: Call a tool (sync)
- list_tools_async() -> List[Dict]: Async version of list_tools
- call_tool_async(name: str, args: Dict) -> Dict: Async version of call_tool
Messages
Messages(system_prompt=None, user_prompt=None, add_date_and_time=False)
Methods:
- add_system_message(content: str): Add a system message
- add_user_message(content: str): Add a user message
- add_assistant_message(content: str): Add an assistant message
- add_tool_call(tool_call: Dict): Add a tool call to the assistant message
- add_tool_calls(tool_calls): Add multiple tool calls from ChatCompletionMessageFunctionToolCall objects
- add_response_message(message): Add a ChatCompletionMessage response to the conversation
- add_tool_response(call_id: str, content: str): Add a tool response
- get_messages() -> List[Dict]: Get all messages
- has_pending_tool_calls() -> bool: Check for pending tool calls
call_llm
call_llm(messages, tools=None, api_key=None, model="gpt-4o-mini", **kwargs) -> str
Parameters:
- messages: Either a Messages instance or a list of message dictionaries
- tools: Optional list of tools in OpenAI function calling format
- api_key: OpenAI API key (defaults to OPENAI_API_KEY from .env)
- model: Model name to use for completion
- **kwargs: Additional parameters passed to the OpenAI API (temperature, max_tokens, etc.)
Returns: The assistant's response content as a string
Requirements
- Python >= 3.11
- Dependencies: mcp, requests, python-dotenv, openai
License
MIT
File details
Details for the file agentic_blocks-0.1.33.tar.gz.
File metadata
- Download URL: agentic_blocks-0.1.33.tar.gz
- Upload date:
- Size: 58.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 946d6e597b11ffd9bf232cd352463286a379cec28fb2c2657f50a5ad36011835 |
| MD5 | 5ba90f63d69bfa20d354161f7c57241f |
| BLAKE2b-256 | c9dccc05c5125408fd68c8478418e737c70f58dade3a397c109b87dd852c4ce4 |
File details
Details for the file agentic_blocks-0.1.33-py3-none-any.whl.
File metadata
- Download URL: agentic_blocks-0.1.33-py3-none-any.whl
- Upload date:
- Size: 62.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | cfe2ebe0aee2b3ceadcba044d39aad63095cde945bb527a9acb6130f4568a3f5 |
| MD5 | 5153f38e577e6ec29b01075e0b0cb917 |
| BLAKE2b-256 | 75bdce2451ea46490772ee5eaaf8a229965c67871df12cabaaffcdedb061f346 |