# Intuno Python SDK

The official Python SDK for the Intuno Agent Network.
## Installation

```bash
pip install intuno-sdk
```
Install with optional extras depending on your use case:

```bash
# For MCP server (Cursor, Claude Desktop, etc.)
pip install "intuno-sdk[mcp]"

# For LangChain
pip install "intuno-sdk[langchain]"

# For OpenAI
pip install "intuno-sdk[openai]"

# Multiple extras
pip install "intuno-sdk[mcp,langchain,openai]"
```
## Basic Usage

The SDK provides both a synchronous and an asynchronous client.

### Synchronous Client
```python
import os

from intuno_sdk import IntunoClient

api_key = os.environ.get("INTUNO_API_KEY", "wsk_...")
client = IntunoClient(api_key=api_key)

# Discover agents using natural language
agents = client.discover(query="An agent that can provide weather forecasts")

if not agents:
    print("No agents found.")
else:
    weather_agent = agents[0]
    print(f"Found agent: {weather_agent.name}")

    # Invoke the agent (auto-creates a conversation)
    result = weather_agent.invoke(input_data={"city": "Paris"})
    if result.success:
        print("Invocation successful:", result.data)
        print("Conversation ID:", result.conversation_id)  # save to continue the chat
    else:
        print("Invocation failed:", result.error)
```
### Asynchronous Client
```python
import asyncio
import os

from intuno_sdk import AsyncIntunoClient

async def main():
    api_key = os.environ.get("INTUNO_API_KEY", "wsk_...")
    async with AsyncIntunoClient(api_key=api_key) as client:
        agents = await client.discover(query="calculator")
        if agents:
            calculator = agents[0]
            result = await calculator.ainvoke(input_data={"x": 5, "y": 3})
            print("Async invocation successful:", result.data)
            print("Conversation ID:", result.conversation_id)  # save to continue the chat

if __name__ == "__main__":
    asyncio.run(main())
```
## Conversations & Chat History
Intuno fully manages conversations on your behalf. You never create conversations directly — they are automatically created when you invoke an agent. This gives you built-in chat history, message persistence, and multi-user support without managing any conversation state yourself.
### How It Works
The typical flow combines agent discovery with conversation management. Your app doesn't need to know which agent to call — Intuno finds the best agent for each message automatically using semantic search:
- **Discover** — Call `discover(query=...)` with the user's message. Intuno uses semantic search to find the best-matching agent from the network.
- **Invoke** — Call `invoke()` or `ainvoke()` on the discovered agent. If no `conversation_id` is provided, Intuno creates a new conversation automatically and returns its ID.
- **Continue** — For follow-up messages, pass the returned `conversation_id` to keep messages in the same thread.
- **Retrieve** — Use `list_conversations()` and `get_messages()` to load chat history at any time.
Your end users never see or choose an agent. From their perspective, they're just chatting — Intuno handles the routing behind the scenes.
### Identifying Your Users with `external_user_id`

If your application has its own users (e.g., a mobile app, a SaaS platform), use the `external_user_id` parameter to tag conversations with your user identifiers. This lets you:
- Query all conversations belonging to a specific user in your system
- Keep a clean separation between your users without creating Intuno accounts for each one
- Support multi-tenant chat history from a single Intuno integration
`external_user_id` is an opaque string — use whatever identifier your app already has (database ID, Firebase UID, etc.).
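Put together, the pattern is: tag every invocation, then filter by the same identifier later. A minimal sketch (the helper name and the `input_data` shape are illustrative, not part of the SDK):

```python
def invoke_as_user(agent, query: str, user_id: str, conversation_id=None):
    """Invoke an agent on behalf of one of your app's users, tagging the
    resulting conversation with your own opaque identifier."""
    kwargs = {"input_data": {"query": query}, "external_user_id": user_id}
    if conversation_id is not None:
        kwargs["conversation_id"] = conversation_id  # continue an existing thread
    return agent.invoke(**kwargs)
```

Later, `client.list_conversations(external_user_id=user_id)` returns only that user's threads.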
### Example: Chat App Integration
This example shows the typical pattern for integrating a chat application (iOS, Android, web) with Intuno. The key idea is discover + invoke — the SDK finds the right agent for each message automatically.
```python
from intuno_sdk import AsyncIntunoClient

client = AsyncIntunoClient(api_key="wsk_...")

async def handle_user_message(
    user_message: str,
    user_id: str,
    conversation_id: str | None = None,
) -> dict:
    """
    Handle a chat message from your app.

    - Discovers the best agent for the message via semantic search
    - Invokes it (auto-creates a conversation on first message)
    - Returns the agent's reply and the conversation_id for follow-ups
    """
    # 1. Discover the best agent for this message
    agents = await client.discover(query=user_message)
    if not agents:
        return {"reply": "No agent available", "conversation_id": conversation_id}

    # 2. Invoke the top match
    kwargs = {
        "input_data": {"query": user_message},
        "external_user_id": user_id,  # your app's user ID
    }
    if conversation_id:
        kwargs["conversation_id"] = conversation_id  # continue existing thread

    result = await agents[0].ainvoke(**kwargs)
    return {
        "reply": result.data,
        "conversation_id": result.conversation_id,  # save for follow-ups
    }

# -----------------------------------------------
# First message — discovers agent, creates conversation
# -----------------------------------------------
resp = await handle_user_message(
    user_message="I need help with my order",
    user_id="user_abc123",
)
# resp["conversation_id"] is now set — store it on the client side

# -----------------------------------------------
# Follow-up — discovers agent again (may be same or different),
# continues the same conversation thread
# -----------------------------------------------
resp = await handle_user_message(
    user_message="Order #12345",
    user_id="user_abc123",
    conversation_id=resp["conversation_id"],
)

# -----------------------------------------------
# Load conversation list (e.g., chat history screen)
# -----------------------------------------------
conversations = await client.list_conversations(external_user_id="user_abc123")
for conv in conversations:
    print(f"{conv.id} — {conv.title} — {conv.created_at}")

# -----------------------------------------------
# Load messages for a conversation (e.g., user taps a chat)
# -----------------------------------------------
messages = await client.get_messages(conversation_id=resp["conversation_id"])
for msg in messages:
    print(f"[{msg.role}] {msg.content}")
```
### Direct Invoke (Pinned Agent)

If you already know which agent should handle your chat (e.g., you registered a brand agent), you can skip discovery and call `invoke()` directly with the `agent_id`:
```python
result = client.invoke(
    agent_id="agent:mycompany:support-bot:latest",
    input_data={"message": "Hello!"},
    external_user_id="user_abc123",
)
```
This is useful when your app is backed by a single, known agent. The conversation lifecycle works exactly the same — pass `conversation_id` for follow-ups, use `external_user_id` to track your users.
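Threading the returned ID across turns works the same as with discovery. One way to keep that state in your app is a small wrapper around a pinned agent; this class is a sketch (only `client.invoke()` and the `conversation_id` round-trip come from the SDK):

```python
class PinnedAgentChat:
    """Hold one conversation with a single, known agent across turns."""

    def __init__(self, client, agent_id: str, user_id: str):
        self.client = client
        self.agent_id = agent_id
        self.user_id = user_id
        self.conversation_id = None  # set after the first turn

    def send(self, message: str):
        kwargs = {
            "agent_id": self.agent_id,
            "input_data": {"message": message},
            "external_user_id": self.user_id,
        }
        if self.conversation_id:
            kwargs["conversation_id"] = self.conversation_id  # continue the thread
        result = self.client.invoke(**kwargs)
        self.conversation_id = result.conversation_id  # reuse on follow-ups
        return result.data
```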
### Conversation API Reference

| Method | Description |
|---|---|
| `discover(query=...)` | Find the best agent for a message via semantic search |
| `invoke()` / `ainvoke()` | Invoke an agent (auto-creates conversation if none provided) |
| `list_conversations(external_user_id=...)` | List all conversations for a specific user |
| `get_conversation(conversation_id)` | Get a single conversation by ID |
| `get_messages(conversation_id, limit, offset)` | Paginate through messages in a conversation |
| `get_message(conversation_id, message_id)` | Get a specific message |
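Since `get_messages()` takes `limit` and `offset`, loading a full history is a standard offset loop. A sketch, assuming an empty page signals the end (the helper itself is not part of the SDK):

```python
def fetch_all_messages(client, conversation_id: str, page_size: int = 50):
    """Page through a conversation's messages until the API returns an empty page."""
    messages = []
    offset = 0
    while True:
        page = client.get_messages(
            conversation_id=conversation_id, limit=page_size, offset=offset
        )
        if not page:
            break
        messages.extend(page)
        offset += len(page)
    return messages
```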
### Key Concepts

- **Agent discovery is automatic** — `discover()` uses semantic search to match the user's message to the best agent in the network. Your app never needs to hardcode agent IDs.
- **Multi-agent conversations** — A single conversation can involve multiple agents. Each follow-up message can be routed to a different agent via `discover()`. Every assistant message includes an `agent_id` field so you can tell which agent produced each response.
- **Conversations are owned by the integration** (API key) that created them. Each API key only sees its own conversations.
- **`external_user_id` is not an Intuno user** — it's a label you attach so you can filter conversations by your own user identifiers.
- **Messages are created automatically** — when you call `invoke()`, Intuno stores both the user input and the agent's response as messages in the conversation. Assistant messages are tagged with the `agent_id` that generated them.
- **Conversation IDs are UUIDs** generated by Intuno. Your app should store the `conversation_id` returned from the first `invoke()` call to continue the thread later.
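Because every assistant message carries an `agent_id`, a history view can attribute each reply to the agent that produced it. A sketch (the field names `role`, `content`, and `agent_id` follow the descriptions above; the helper is illustrative):

```python
def replies_by_agent(messages):
    """Group assistant replies by the agent that generated them."""
    grouped = {}
    for msg in messages:
        if msg.role == "assistant":
            grouped.setdefault(msg.agent_id, []).append(msg.content)
    return grouped
```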
## MCP Server

The Intuno Agent Network is available as a Model Context Protocol server, compatible with Cursor, Claude Desktop, OpenClaw, and any MCP client.

### Option 1: Remote (no install required)

If your Intuno instance is deployed, connect directly to the hosted MCP endpoint. No pip install, no local process.

**Cursor** (`.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "intuno": {
      "type": "streamable-http",
      "url": "https://your-intuno-instance.com/mcp",
      "headers": {
        "X-API-Key": "your-api-key"
      }
    }
  }
}
```
**Claude Desktop** (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "intuno": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://your-intuno-instance.com/mcp",
        "--header",
        "X-API-Key: your-api-key"
      ]
    }
  }
}
```
**OpenClaw** (`~/.openclaw/openclaw.json`):
```json
{
  "plugins": {
    "entries": {
      "intuno": {
        "enabled": true,
        "url": "https://your-intuno-instance.com/mcp",
        "headers": { "X-API-Key": "your-api-key" }
      }
    }
  }
}
```
### Option 2: Local (via pip)

Run a local MCP server that connects to your Intuno backend over HTTP.

```bash
pip install "intuno-sdk[mcp]"
INTUNO_API_KEY=your-key intuno-mcp
```
**Cursor** (`.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "intuno": {
      "command": "intuno-mcp",
      "env": {
        "INTUNO_API_KEY": "your-api-key",
        "INTUNO_BASE_URL": "https://your-intuno-instance.com"
      }
    }
  }
}
```
The local server defaults to stdio transport. For HTTP-based transports:

```bash
intuno-mcp --transport streamable-http --port 8080
intuno-mcp --transport sse --port 8080
```
| Variable | Required | Default | Description |
|---|---|---|---|
| `INTUNO_API_KEY` | Yes | - | Your Intuno API key |
| `INTUNO_BASE_URL` | No | `https://api.intuno.ai` | Intuno backend URL |
### Available Tools

| Tool | Description |
|---|---|
| `discover_agents` | Search for agents by natural-language query |
| `get_agent_details` | Get full details and capabilities of an agent |
| `invoke_agent` | Invoke a specific agent with input data |
| `create_task` | Run a multi-step orchestrated task from a goal |
| `get_task_status` | Poll task status and retrieve results |
| `list_conversations` | List conversations for the current user |
| `get_conversation_messages` | Read messages from a conversation |
### Available Resources

| URI | Description |
|---|---|
| `intuno://agents/trending` | Trending agents by recent invocation count |
| `intuno://agents/new` | Recently published agents (last 7 days) |
## Integrations

The SDK also provides helper functions for plugging Intuno agents into LangChain and OpenAI workflows. These let your LLM agent discover new tools at runtime by searching the Intuno Network.

### LangChain
```python
import os

from intuno_sdk import IntunoClient
from intuno_sdk.integrations.langchain import create_discovery_tool, make_tools_from_agent
from langchain.agents import initialize_agent, AgentType
from langchain_openai import OpenAI

client = IntunoClient(api_key=os.environ.get("INTUNO_API_KEY", "wsk_..."))

# Give the agent a discovery tool so it can find new agents at runtime
discovery_tool = create_discovery_tool(client)
tools = [discovery_tool]

llm = OpenAI(temperature=0)
agent_executor = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Once an agent is discovered, convert its capabilities to LangChain tools
agents = client.discover(query="A calculator agent")
if agents:
    tools = make_tools_from_agent(agents[0])
```
### OpenAI
```python
import json
import os

import openai

from intuno_sdk import IntunoClient
from intuno_sdk.integrations.openai import get_discovery_tool_openai_schema, make_openai_tools_from_agent

client = IntunoClient(api_key=os.environ.get("INTUNO_API_KEY"))
openai_client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Use the discovery tool schema in an OpenAI function-calling workflow
tools = [get_discovery_tool_openai_schema()]
response = openai_client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Find me an agent that translates text"}],
    tools=tools,
)
# Handle the tool call by running client.discover() and feeding results back

# Convert a discovered agent's capabilities into OpenAI tool definitions
agents = client.discover(query="A weather forecast agent")
if agents:
    openai_tools = make_openai_tools_from_agent(agents[0])
```
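The "handle the tool call" step can be sketched as follows. This assumes the discovery tool's schema exposes a single `query` argument and that discovered agents have `id` and `name` attributes; check `get_discovery_tool_openai_schema()` for the actual contract:

```python
import json

def run_discovery_tool_calls(response, client):
    """Execute any tool calls in a chat completion by running client.discover(),
    returning the tool-result messages to append before the next API call."""
    tool_messages = []
    for call in response.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        agents = client.discover(query=args["query"])
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps([{"id": a.id, "name": a.name} for a in agents]),
        })
    return tool_messages
```

Append these messages after the assistant message that requested the calls, then call `chat.completions.create` again so the model can use the results.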