# MBX AI SDK
A Python library for building AI applications with LLMs.
## Features

- **OpenRouter Integration**: Connect to various LLM providers through OpenRouter
- **Intelligent Agent System**: `AgentClient` with dialog-based thinking, question generation, and quality iteration
- **Tool Integration**: Easily integrate tools with LLMs using the Model Context Protocol (MCP)
- **Structured Output**: Get structured, typed responses from LLMs
- **Chat Interface**: Simple chat interface for interacting with LLMs
- **FastAPI Server**: Built-in FastAPI server for tool integration
## Installation

```bash
pip install mbxai
```
## Quick Start

### Basic Usage

```python
from mbxai import OpenRouterClient

# Initialize the client
client = OpenRouterClient(api_key="your-api-key")

# Chat with an LLM (inside an async context)
response = await client.chat([
    {"role": "user", "content": "Hello, how are you?"}
])
print(response.choices[0].message.content)
```
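`client.chat` is a coroutine, so the snippet above assumes it already runs inside an `async` function. A self-contained sketch of that pattern, with a hypothetical stub standing in for `OpenRouterClient` so it runs without an API key:

```python
import asyncio

class StubClient:
    """Stand-in for OpenRouterClient, so the async call pattern is runnable as-is."""
    async def chat(self, messages: list[dict]) -> str:
        # A real client would call the LLM here; we just echo the last message.
        return f"echo: {messages[-1]['content']}"

async def main() -> str:
    client = StubClient()
    return await client.chat([{"role": "user", "content": "Hello"}])

print(asyncio.run(main()))  # echo: Hello
```

In a script, `asyncio.run(main())` is the usual entry point; inside an existing event loop (e.g. a FastAPI handler), you would simply `await` the call.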
### Quick Agent Example

```python
from mbxai import AgentClient, OpenRouterClient
from pydantic import BaseModel, Field

class TravelPlan(BaseModel):
    destination: str = Field(description="Travel destination")
    activities: list[str] = Field(description="Recommended activities")
    budget: str = Field(description="Estimated budget")

# Initialize agent
client = OpenRouterClient(token="your-api-key")
agent = AgentClient(client)

# Get intelligent response with automatic quality improvement
response = agent.agent(
    prompt="Plan a weekend trip to a mountain destination",
    final_response_structure=TravelPlan,
    ask_questions=False
)

plan = response.final_response
print(f"Destination: {plan.destination}")
print(f"Activities: {', '.join(plan.activities)}")
```
### Using Tools

```python
from mbxai import OpenRouterClient, ToolClient
from pydantic import BaseModel

# Define your tool's input and output models
class CalculatorInput(BaseModel):
    a: float
    b: float

class CalculatorOutput(BaseModel):
    result: float

# Create a calculator tool
async def calculator(input: CalculatorInput) -> CalculatorOutput:
    return CalculatorOutput(result=input.a + input.b)

# Initialize the client with tools
client = ToolClient(OpenRouterClient(api_key="your-api-key"))
client.add_tool(calculator)

# Use the tool in a chat
response = await client.chat([
    {"role": "user", "content": "What is 2 + 3?"}
])
print(response.choices[0].message.content)
```
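Under the hood, a tool client has to map the model's tool calls back to Python callables and invoke them with the JSON arguments the model produced. A minimal registry sketch of that idea (hypothetical, not the actual `ToolClient` internals):

```python
import json
from typing import Any, Callable

# Registry of tool name -> callable, as a tool client might keep internally.
_registry: dict[str, Callable[..., Any]] = {}

def add_tool(fn: Callable[..., Any]) -> None:
    """Register a tool under its function name."""
    _registry[fn.__name__] = fn

def dispatch(name: str, arguments: str) -> Any:
    """Invoke a registered tool with JSON-encoded arguments from the model."""
    return _registry[name](**json.loads(arguments))

def calculator(a: float, b: float) -> float:
    return a + b

add_tool(calculator)
print(dispatch("calculator", '{"a": 2, "b": 3}'))  # 5
```

The result of `dispatch` is what gets serialized back into the chat as the tool-call response.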
### Using MCP (Model Context Protocol)

```python
from mbxai import OpenRouterClient, MCPClient
from mbxai.mcp import MCPServer
from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel

# Define your tool's input and output models
class CalculatorInput(BaseModel):
    a: float
    b: float

class CalculatorOutput(BaseModel):
    result: float

# Create a FastMCP instance
mcp = FastMCP("calculator-service")

# Create a calculator tool
@mcp.tool()
async def calculator(argument: CalculatorInput) -> CalculatorOutput:
    return CalculatorOutput(result=argument.a + argument.b)

# Start the MCP server
server = MCPServer("calculator-service")
await server.add_tool(calculator)
await server.start()

# Initialize the MCP client
client = MCPClient(OpenRouterClient(api_key="your-api-key"))
await client.register_mcp_server("calculator-service", "http://localhost:8000")

# Use the tool in a chat
response = await client.chat([
    {"role": "user", "content": "What is 2 + 3?"}
])
print(response.choices[0].message.content)
```
## Using AgentClient (Intelligent Dialog System)
The AgentClient provides an intelligent dialog-based thinking process that can ask clarifying questions, iterate on responses, and provide structured outputs.
### Basic Agent Usage

```python
from mbxai import AgentClient, OpenRouterClient, AnswerList, Answer
from pydantic import BaseModel, Field

# Define your response structure
class BookRecommendation(BaseModel):
    title: str = Field(description="The title of the recommended book")
    author: str = Field(description="The author of the book")
    genre: str = Field(description="The genre of the book")
    reason: str = Field(description="Why this book is recommended")

# Initialize the agent
client = OpenRouterClient(token="your-api-key")
agent = AgentClient(client)

# Get a recommendation with questions
response = agent.agent(
    prompt="I want a book recommendation",
    final_response_structure=BookRecommendation,
    ask_questions=True  # Agent will ask clarifying questions
)

if response.has_questions():
    # Display questions to user
    for question in response.questions:
        print(f"Q: {question.question}")

    # Collect answers and continue
    answers = AnswerList(answers=[
        Answer(key="genre", answer="I love science fiction"),
        Answer(key="complexity", answer="I prefer complex narratives")
    ])

    # Continue the conversation
    final_response = agent.answer_to_agent(response.agent_id, answers)
    book_rec = final_response.final_response
    print(f"Recommended: {book_rec.title} by {book_rec.author}")
else:
    # Direct response without questions
    book_rec = response.final_response
    print(f"Recommended: {book_rec.title} by {book_rec.author}")
```
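The question/answer flow above implies per-session bookkeeping keyed by `agent_id`: open questions are held until answers arrive, then the dialog continues. A minimal sketch of how such state could be tracked (hypothetical, not the library's actual implementation):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Question:
    key: str
    question: str

@dataclass
class Session:
    """Tracks one agent dialog: open questions and collected answers."""
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    questions: list[Question] = field(default_factory=list)
    answers: dict[str, str] = field(default_factory=dict)

    def has_questions(self) -> bool:
        # True while any question key still lacks a recorded answer.
        return any(q.key not in self.answers for q in self.questions)

    def answer(self, key: str, text: str) -> None:
        self.answers[key] = text

session = Session(questions=[Question("genre", "What genre do you enjoy?")])
assert session.has_questions()
session.answer("genre", "science fiction")
assert not session.has_questions()  # dialog can now produce the final response
```

Once `has_questions()` is false, the collected answers can be folded back into the prompt for the final structured generation.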
### Agent with Tool Integration

```python
from mbxai import AgentClient, ToolClient, OpenRouterClient
from pydantic import BaseModel, Field

# Initialize with tool support
openrouter_client = OpenRouterClient(token="your-api-key")
tool_client = ToolClient(openrouter_client)
agent = AgentClient(tool_client)

# Register tools via the agent (schema auto-generated!)
def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    """Get weather information for a location.

    Args:
        location: The city or location name
        unit: Temperature unit (fahrenheit or celsius)
    """
    return {"location": location, "temperature": "72°F", "conditions": "Sunny"}

agent.register_tool(
    name="get_weather",
    description="Get current weather for a location",
    function=get_weather
    # Schema automatically generated from the function signature!
)

# Use agent with tools
class WeatherResponse(BaseModel):
    location: str = Field(description="The location")
    weather: str = Field(description="Weather description")
    recommendations: list[str] = Field(description="Clothing recommendations")

response = agent.agent(
    prompt="What's the weather in San Francisco and what should I wear?",
    final_response_structure=WeatherResponse,
    ask_questions=False
)

weather_info = response.final_response
print(f"Weather: {weather_info.weather}")
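`register_tool` derives the parameter schema from the function signature. One plausible way to do that with the standard library (a sketch of the general technique, not the actual mbxai implementation):

```python
import inspect
from typing import get_type_hints

# Map common Python annotations to JSON Schema type names.
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_from_signature(fn) -> dict:
    """Build a JSON-Schema-like dict from a function's annotations and defaults."""
    sig = inspect.signature(fn)
    hints = get_type_hints(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _TYPE_MAP.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> the model must supply it
    return {"type": "object", "properties": properties, "required": required}

def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    ...

schema = schema_from_signature(get_weather)
print(schema)  # 'location' is required; 'unit' is optional (it has a default)
```

Parameters with defaults become optional in the schema, which matches how the tool was called in the example above.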
### Agent Configuration

```python
# Configure quality iterations (default: 2)
agent = AgentClient(
    ai_client=openrouter_client,
    max_iterations=3  # More iterations = higher quality, slower response
)

# Different configurations for different use cases:
# max_iterations=0: fastest, basic quality (chatbots)
# max_iterations=1: fast, good quality (content generation)
# max_iterations=2: balanced (default, recommended)
# max_iterations=3+: highest quality (analysis, reports)
```
### Agent with MCP Client

```python
from mbxai import AgentClient, MCPClient, OpenRouterClient

# Initialize with MCP support
mcp_client = MCPClient(OpenRouterClient(token="your-api-key"))
agent = AgentClient(mcp_client)

# Register MCP servers
agent.register_mcp_server("data-analysis", "http://localhost:8000")

# Register individual tools (analyze_function, schema, and AnalysisReport
# are defined elsewhere in your application)
agent.register_tool("analyze_data", "Analyze dataset", analyze_function, schema)

# Use agent with full MCP capabilities
response = agent.agent(
    prompt="Analyze the sales data and provide insights",
    final_response_structure=AnalysisReport,
    ask_questions=True
)
```
### Agent Features

- **Intelligent Questions**: Automatically generates clarifying questions when needed
- **Quality Iteration**: Improves responses through multiple AI review cycles
- **Tool Integration**: Seamlessly works with `ToolClient` and `MCPClient`
- **Structured Output**: Always returns properly typed Pydantic models
- **Session Management**: Handles multi-turn conversations with question/answer flow
- **Configurable**: Adjust quality vs. speed with the `max_iterations` parameter
## Supported AI Clients
| Client | Structured Responses | Tool Registration | MCP Server Registration |
|---|---|---|---|
| OpenRouterClient | ✅ | ❌ | ❌ |
| ToolClient | ✅ | ✅ | ❌ |
| MCPClient | ✅ | ✅ | ✅ |
## Development

### Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/mbxai.git
   cd mbxai
   ```

2. Create a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -e ".[dev]"
   ```

### Running Tests

```bash
pytest tests/
```
## License

MIT License