
A lightweight agentic AI framework with MCP tool serving


Agentic AI MCP

Lightweight agentic AI with MCP tools. Supports multiple LLM providers (Anthropic, OpenAI), multi-agent orchestration, and distributed setups where tools run on one machine and agents on another.

Install

pip install agentic-ai-mcp

Setup

Set your API key in a .env file (only needed on the client/agent machine):

# For Anthropic (default)
ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
OPENAI_API_KEY=sk-...
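The library presumably picks these keys up from the environment. If you want to see what loading a .env file involves (python-dotenv or similar is the usual tool; this hand-rolled loader is just an illustrative sketch, not the package's own mechanism):

```python
import os

def load_dotenv_minimal(path=".env"):
    """Parse KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Don't overwrite keys already set in the environment
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_dotenv_minimal()
```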

Quick Start

See the example notebooks in the project repository.

Usage

Server (expose tools)

Run this on the machine where you want to host tools:

from agentic_ai_mcp import AgenticAIServer

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def greet(name: str, times: int = 1) -> str:
    """Greet someone."""
    return ("Hello, " + name + "! ") * times

# Create server and register tools
server = AgenticAIServer(host="0.0.0.0", port=8888)
server.register_tool(add)
server.register_tool(greet)

print(f"Tools: {server.tools}")
print(f"URL: {server.mcp_url}")

# Start server in background
server.start()

# ... do other things ...

# Stop when done
server.stop()
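Tool schemas in MCP servers are typically derived from a function's signature, type hints, and docstring, which is why the annotations and docstrings above matter. A sketch of that kind of introspection (illustrative only, not the package's internal code):

```python
import inspect

def describe_tool(func):
    """Extract a tool description from a plain Python function."""
    sig = inspect.signature(func)
    params = {
        name: (p.annotation.__name__
               if p.annotation is not inspect.Parameter.empty else "any")
        for name, p in sig.parameters.items()
    }
    return {"name": func.__name__, "doc": inspect.getdoc(func), "params": params}

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

print(describe_tool(add))
# → {'name': 'add', 'doc': 'Add two numbers.', 'params': {'a': 'int', 'b': 'int'}}
```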

Client (run agents)

Run this on another machine to connect to the server and execute agents:

from agentic_ai_mcp import AgenticAIClient

# Connect to MCP server
client = AgenticAIClient(mcp_url="http://<server-ip>:8888/mcp")

# Simple agent workflow
result = await client.run("Calculate 2+1, then greet 'Alice' that many times.")
print(result)

# Planning-based workflow for complex tasks
result = await client.run_with_planning("Calculate ((0+2) + (1+1) + 1), then greet 'Bob' that many times.")
print(result)
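client.run and client.run_with_planning are coroutines, so the awaits above assume an async context such as a notebook cell. In a plain script, drive them with asyncio.run; a minimal pattern, shown here with a stand-in coroutine in place of the real client:

```python
import asyncio

# Stand-in for client.run; the real call is `await client.run(prompt)`
async def run(prompt: str) -> str:
    return f"result for: {prompt}"

async def main():
    result = await run("Calculate 2+1, then greet 'Alice' that many times.")
    print(result)

asyncio.run(main())
```

(The _sync variants described under Synchronous Usage avoid this boilerplate entirely.)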

Multiple MCP Servers

Connect to tools spread across multiple servers:

from agentic_ai_mcp import AgenticAIClient

# Connect to multiple MCP servers
client = AgenticAIClient(
    mcp_urls=[
        "http://<server-1>:8888/mcp",  # math tools
        "http://<server-2>:9999/mcp",  # greeting tools
    ]
)

# The agent can use tools from all servers
result = await client.run("Calculate 2+3, then greet 'Alice' that many times.")
print(result)

Multi-Agent Orchestration

Use AgenticAIOrchestrator to coordinate multiple agents working together on a task. Each agent can have a role and a tool_filter to specialize its behavior.

Sequential Flow

Agents run one after another. Each agent sees the previous agent's output as context:

from agentic_ai_mcp import AgenticAIClient, AgenticAIOrchestrator

researcher = AgenticAIClient(
    mcp_url="http://<server-ip>:8888/mcp",
    role="researcher",
    tool_filter=["search"],  # only load the 'search' tool
)

writer = AgenticAIClient(
    mcp_url="http://<server-ip>:8888/mcp",
    role="writer",
    tool_filter=["write_file"],
)

orchestrator = AgenticAIOrchestrator(
    clients=[researcher, writer],
    flow_type="sequential",
)

result = await orchestrator.run("Research Python best practices and write a summary.")
print(result)
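The sequential flow described above (each agent receiving the previous agent's output as context) can be sketched with plain coroutines standing in for the clients:

```python
import asyncio

async def researcher(prompt: str, context: str = "") -> str:
    # Stand-in for the researcher client
    return f"notes on: {prompt}"

async def writer(prompt: str, context: str = "") -> str:
    # Stand-in for the writer client; receives the researcher's output
    return f"summary based on [{context}]"

async def run_sequential(prompt, agents):
    context = ""
    for agent in agents:
        # Each agent's output becomes the next agent's context
        context = await agent(prompt, context)
    return context

result = asyncio.run(run_sequential("Python best practices", [researcher, writer]))
print(result)  # → summary based on [notes on: Python best practices]
```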

Parallel Flow

Agents run concurrently and their results are combined:

orchestrator = AgenticAIOrchestrator(
    clients=[agent_a, agent_b],
    flow_type="parallel",
)

result = await orchestrator.run("Analyze this dataset from two perspectives.")
print(result)
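Running agents concurrently and combining their results is the asyncio.gather pattern; a self-contained sketch with stand-in agents:

```python
import asyncio

async def agent_a(prompt: str) -> str:
    return f"A's take on {prompt}"

async def agent_b(prompt: str) -> str:
    return f"B's take on {prompt}"

async def run_parallel(prompt, agents):
    # Run all agents concurrently and collect their results in order
    results = await asyncio.gather(*(agent(prompt) for agent in agents))
    return "\n".join(results)

print(asyncio.run(run_parallel("the dataset", [agent_a, agent_b])))
```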

With a Synthesizer

Add a synthesizer agent that runs last to combine all results into a final output:

from agentic_ai_mcp import AgenticAIClient, AgenticAIOrchestrator

researcher = AgenticAIClient(mcp_url=MCP_URL, role="researcher")
analyst = AgenticAIClient(mcp_url=MCP_URL, role="analyst")
synthesizer = AgenticAIClient(mcp_url=MCP_URL, role="synthesizer")

orchestrator = AgenticAIOrchestrator(
    clients=[researcher, analyst],
    flow_type="parallel",
    synthesizer=synthesizer,
)

result = await orchestrator.run("Evaluate the pros and cons of microservices.")
print(result)

With Planning

The orchestrator also supports the planning workflow on each agent:

result = await orchestrator.run_with_planning("Complex multi-step task...")

Shared State

When orchestrating multiple agents, use SharedState to share data between them:

from agentic_ai_mcp import AgenticAIOrchestrator, SharedState

state = SharedState({"project": "demo"})

orchestrator = AgenticAIOrchestrator(
    clients=[agent_a, agent_b],
    shared_state=state,
)

result = await orchestrator.run("Do the task.")

# After execution, agent results are stored in shared state
print(state.get("agent_results"))
print(state.to_dict())

Using OpenAI

from agentic_ai_mcp import AgenticAIClient

# Use OpenAI instead of Anthropic
client = AgenticAIClient(
    mcp_url="http://<server-ip>:8888/mcp",
    provider="openai",
    model="gpt-4o-mini"
)

result = await client.run("Calculate -1+2")

Passing API Key Directly

from agentic_ai_mcp import AgenticAIClient

# Pass API key directly (instead of using .env)
client = AgenticAIClient(
    mcp_url="http://<server-ip>:8888/mcp",
    api_key="sk-ant-..."
)

Synchronous Usage

All async methods have synchronous variants for use outside of async contexts:

# Single client
result = client.run_sync("Calculate 2+3")
result = client.run_with_planning_sync("Complex task...")

# Orchestrator
result = orchestrator.run_sync("Do the task.")
result = orchestrator.run_with_planning_sync("Complex task...")
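A common way such _sync variants are built (an assumption about this package's internals, but the standard idiom) is a thin asyncio.run wrapper around the coroutine:

```python
import asyncio

async def run(prompt: str) -> str:
    # Stand-in for the async client.run
    return f"done: {prompt}"

def run_sync(prompt: str) -> str:
    """Blocking wrapper; only valid when no event loop is already running."""
    return asyncio.run(run(prompt))

print(run_sync("Calculate 2+3"))  # → done: Calculate 2+3
```

This is also why sync variants cannot be called from inside an already-running event loop (e.g., a notebook cell): asyncio.run refuses to nest.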

API Reference

AgenticAIServer

| Property/Method | Description |
| --- | --- |
| server.tools | List of registered tool names |
| server.mcp_url | Server URL |
| server.is_running | Whether the server is running |
| server.register_tool(func) | Register a function as an MCP tool |
| server.start() | Start the MCP server in the background |
| server.stop() | Stop the MCP server |

AgenticAIClient

| Property/Method | Description |
| --- | --- |
| client.mcp_url | Primary MCP server URL (first in the list) |
| client.mcp_urls | List of all MCP server URLs |
| client.tools | List of tool names loaded from all servers |
| client.role | Agent's role description (used by the orchestrator) |
| client.shared_state | Shared state dict (set by the orchestrator; None if standalone) |
| client.run(prompt) | Simple agentic workflow (async) |
| client.run_with_planning(prompt) | Planning-based workflow (async) |
| client.run_sync(prompt) | Simple agentic workflow (sync) |
| client.run_with_planning_sync(prompt) | Planning-based workflow (sync) |

AgenticAIOrchestrator

| Property/Method | Description |
| --- | --- |
| orchestrator.clients | List of orchestrated clients |
| orchestrator.shared_state | The SharedState instance |
| orchestrator.run(prompt) | Run with simple ReAct on each client (async) |
| orchestrator.run_with_planning(prompt) | Run with planning on each client (async) |
| orchestrator.run_sync(prompt) | Synchronous version of run() |
| orchestrator.run_with_planning_sync(prompt) | Synchronous version of run_with_planning() |

SharedState

| Property/Method | Description |
| --- | --- |
| state.get(key, default) | Get a value from shared state |
| state.set(key, value) | Set a value in shared state |
| state.to_dict() | Return a copy of the full state |
| key in state | Check whether a key exists |
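The interface above maps onto a thin dict wrapper; a sketch consistent with the documented methods (not the package's actual implementation):

```python
class SharedStateSketch:
    """Minimal dict-backed state matching the documented SharedState API."""

    def __init__(self, initial=None):
        self._data = dict(initial or {})

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

    def to_dict(self):
        # Return a copy so callers can't mutate internal state
        return dict(self._data)

    def __contains__(self, key):
        return key in self._data

state = SharedStateSketch({"project": "demo"})
state.set("stage", "done")
print("project" in state, state.get("stage"))  # → True done
```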

License

MIT
