An Intelligence Operating System.
Documentation | Discord | PyPI
LION - Language InterOperable Network
An Agentic Intelligence SDK
LionAGI is a robust framework for orchestrating multi-step AI operations with precise control. Bring together multiple models, advanced ReAct reasoning, tool integrations, and custom validations in a single coherent pipeline.
Why LionAGI?
- Structured: Validate and type all LLM interactions with Pydantic.
- Expandable: Integrate multiple providers (OpenAI, Anthropic, Perplexity, custom) with minimal friction.
- Controlled: Use built-in safety checks, concurrency strategies, and advanced multi-step flows like ReAct.
- Transparent: Debug easily with real-time logging, message introspection, and tool usage tracking.
Installation
```shell
uv add lionagi       # recommended: manage dependencies with pyproject and uv
pip install lionagi  # or install directly
```
Quick Start
```python
from lionagi import Branch, iModel

# Pick a model
gpt4o = iModel(provider="openai", model="gpt-4o-mini")

# Create a Branch (conversation context)
hunter = Branch(
    system="you are a hilarious dragon hunter who responds in 10-word rhymes.",
    chat_model=gpt4o,
)

# Communicate asynchronously
response = await hunter.communicate("I am a dragon")
print(response)
```

```
You claim to be a dragon, oh what a braggin'!
```
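Note that `communicate` is a coroutine, so the top-level `await` above only works in a notebook or async REPL. In a plain script, wrap the call in `asyncio.run`; a minimal sketch with a stand-in coroutine in place of the real model call:

```python
import asyncio

async def ask(prompt: str) -> str:
    # Stand-in for `await hunter.communicate(prompt)`; the real call
    # awaits the model provider over the network.
    return f"echo: {prompt}"

async def main() -> str:
    return await ask("I am a dragon")

print(asyncio.run(main()))  # echo: I am a dragon
```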
Structured Responses
Use Pydantic to keep outputs structured:
```python
from pydantic import BaseModel

class Joke(BaseModel):
    joke: str

res = await hunter.operate(
    instruction="Tell me a short dragon joke",
    response_format=Joke,
)
print(type(res))
print(res.joke)
```

```
<class '__main__.Joke'>
With fiery claws, dragons hide their laughter flaws!
```
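Conceptually, structured output boils down to validating the model's raw JSON reply against the declared schema and failing loudly on mismatch. A stdlib-only sketch of that validation step (LionAGI itself does this with Pydantic; `parse_joke` is invented for illustration):

```python
import json
from dataclasses import dataclass

@dataclass
class Joke:
    joke: str

def parse_joke(raw: str) -> Joke:
    # Validate the LLM's raw JSON reply against the expected schema,
    # rejecting missing or unexpected fields.
    data = json.loads(raw)
    if set(data) != {"joke"}:
        raise ValueError(f"unexpected fields: {set(data)}")
    return Joke(joke=data["joke"])

res = parse_joke('{"joke": "Why did the dragon cross the road?"}')
print(type(res).__name__, "-", res.joke)
```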
ReAct and Tools
LionAGI supports advanced multi-step reasoning with ReAct. Tools let the LLM invoke external actions:
```shell
pip install "lionagi[reader]"
```
```python
from lionagi import Branch, iModel
from lionagi.tools.types import ReaderTool

# Define the model first
gpt4o = iModel(provider="openai", model="gpt-4o-mini")

branch = Branch(chat_model=gpt4o, tools=[ReaderTool])

result = await branch.ReAct(
    instruct={
        "instruction": "Summarize my PDF and compare with relevant papers.",
        "context": {"paper_file_path": "/path/to/paper.pdf"},
    },
    extension_allowed=True,  # allow multi-round expansions
    max_extensions=5,
    verbose=True,  # see step-by-step chain-of-thought
)
print(result)
```
The LLM can now open the PDF, read in slices, fetch references, and produce a final structured summary.
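The ReAct loop itself is easy to picture: the model alternates between reasoning and tool invocations until it can produce a final answer, with a cap on the number of rounds. A toy version of the pattern (not LionAGI internals; the scripted steps and `tools` dict are invented for illustration, with an LLM standing in for the script in the real thing):

```python
# Toy ReAct loop: a scripted "model" alternates reasoning with tool
# calls until it emits a final answer, bounded by max_extensions.
tools = {"read_pdf": lambda path: f"contents of {path}"}

script = [
    {"thought": "I need the paper text.", "action": "read_pdf", "input": "/path/to/paper.pdf"},
    {"thought": "I have enough to answer.", "answer": "Summary: ..."},
]

def react(steps, max_extensions=5):
    observations = []
    for step in steps[:max_extensions]:
        if "answer" in step:
            return step["answer"], observations
        # Invoke the named tool and feed the observation back in.
        observations.append(tools[step["action"]](step["input"]))
    raise RuntimeError("max extensions reached without an answer")

answer, obs = react(script)
print(answer)  # Summary: ...
```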
MCP (Model Context Protocol) Integration
LionAGI supports Anthropic's Model Context Protocol for seamless tool integration:
```shell
pip install "lionagi[mcp]"
```
```python
from lionagi import Branch, iModel, load_mcp_tools

gpt4o = iModel(provider="openai", model="gpt-4o-mini")

# Load tools from any MCP server
tools = await load_mcp_tools(".mcp.json", ["search", "memory"])

# Use them with ReAct reasoning
branch = Branch(chat_model=gpt4o, tools=tools)
result = await branch.ReAct(
    instruct={"instruction": "Research recent AI developments"},
    tools=["search_exa_search"],
    max_extensions=3,
)
```
- Dynamic Discovery: Auto-discover and register tools from MCP servers
- Type Safety: Full Pydantic validation for tool interactions
- Connection Pooling: Efficient resource management with automatic reuse
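An `.mcp.json` file conventionally maps server names to launch commands. A sketch of reading one and selecting the requested servers (the config shape follows the common MCP convention; the function and server entries are invented for illustration, not `load_mcp_tools` internals):

```python
import json

def select_servers(config_text: str, wanted: list[str]) -> dict:
    # Parse an MCP config and keep only the requested servers; a real
    # loader would then spawn each server and discover its tools over
    # the protocol.
    config = json.loads(config_text)
    servers = config.get("mcpServers", {})
    return {name: servers[name] for name in wanted if name in servers}

sample = json.dumps({
    "mcpServers": {
        "search": {"command": "uvx", "args": ["mcp-server-search"]},
        "memory": {"command": "uvx", "args": ["mcp-server-memory"]},
        "other": {"command": "uvx", "args": ["mcp-server-other"]},
    }
})
print(select_servers(sample, ["search", "memory"]))
```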
Observability & Debugging
- Inspect messages:

```python
df = branch.to_df()
print(df.tail())
```
- Action logs show each tool call, arguments, and outcomes.
- Verbose ReAct provides chain-of-thought analysis (helpful for debugging multi-step flows).
Example: Multi-Model Orchestration
```python
from lionagi import Branch, iModel

# Define models for multi-model orchestration
gpt4o = iModel(provider="openai", model="gpt-4o-mini")
sonnet = iModel(
    provider="anthropic",
    model="claude-3-5-sonnet-20241022",
    max_tokens=1000,  # max_tokens is required for Anthropic models
)

branch = Branch(chat_model=gpt4o)

# Switch models mid-flow
analysis = await branch.communicate("Analyze these stats", chat_model=sonnet)
```
Seamlessly route to different models in the same workflow.
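Because `chat_model` can be overridden per call, routing policies are just ordinary Python. A hypothetical sketch that picks a model name by task type (the mapping and helper are invented; with LionAGI you would pass the chosen `iModel` as `chat_model=` on each call):

```python
# Hypothetical router: choose a model per task type, falling back to
# a cheap default for anything unrecognized.
ROUTES = {
    "analysis": "claude-3-5-sonnet-20241022",
    "chat": "gpt-4o-mini",
}

def pick_model(task_type: str, default: str = "gpt-4o-mini") -> str:
    return ROUTES.get(task_type, default)

print(pick_model("analysis"))   # claude-3-5-sonnet-20241022
print(pick_model("summarize"))  # gpt-4o-mini (fallback)
```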
CLI Agent Integration
LionAGI integrates with coding agent CLIs as providers, enabling multi-agent orchestration across models:
| Provider | CLI | Models |
|---|---|---|
| `claude_code` | Claude Code | sonnet, opus, haiku |
| `codex` | OpenAI Codex | gpt-5.3-codex-spark, gpt-5.4 |
| `gemini_code` | Gemini CLI | gemini-3.1-* (unstable) |
```python
from lionagi import Branch, iModel

# Use any CLI agent as a model
agent = Branch(chat_model=iModel(provider="claude_code", model="sonnet"))
response = await agent.communicate("Explain the architecture of this codebase")

# Switch providers mid-flow
codex = iModel(provider="codex", model="gpt-5.3-codex-spark")
response2 = await agent.communicate("Compare with your analysis", chat_model=codex)
```
See the CLI Guide for the li command-line tool that wraps these providers with fan-out orchestration, session persistence, and effort control.
CLI — li
LionAGI ships a command-line tool li for spawning agents, orchestrating multi-agent fan-out, and team messaging. See the full CLI Guide for details.
```shell
# Single agent
li agent claude/sonnet "Explain the observer pattern"
li agent codex/gpt-5.3-codex-spark "Review this function for bugs" --yolo

# Fan-out: orchestrator decomposes the task, N workers run in parallel, optional synthesis
li o fanout claude/sonnet "What are the key design patterns in this codebase?" -n 3 --with-synthesis

# Fan-out with team tracking: workers get named identities, results posted as messages
li o fanout claude/sonnet "Improve test coverage for this project" \
  -n 5 --yolo --team-mode "coverage-boost" --with-synthesis

# Team messaging: inbox-style coordination between agents
li team create "research" -m "analyst,writer,reviewer"
li team send "analyze auth middleware" -t <team-id> --to all
li team receive -t <team-id> --as writer

# Resume any conversation
li agent -r <branch-id> "follow up on your analysis"
```
Optional Dependencies
- `lionagi[reader]` - Reader tool for web pages and other unstructured data
- `lionagi[ollama]` - Ollama model support for local inference
- `lionagi[rich]` - Rich output formatting for better console display
- `lionagi[schema]` - Pydantic schema conversion for persistent model classes
- `lionagi[postgres]` - Postgres database support for storing and retrieving structured data
- `lionagi[graph]` - Graph display for visualizing complex workflows
- `lionagi[sqlite]` - SQLite database support for lightweight data storage (also requires the `postgres` extra)
Community & Contributing
We welcome issues, ideas, and pull requests:
- Discord: Join to chat or get help
- Issues / PRs: GitHub
Citation
```bibtex
@software{Li_LionAGI_2023,
  author = {Haiyang Li},
  month = {12},
  year = {2023},
  title = {LionAGI: Towards Automated General Intelligence},
  url = {https://github.com/lion-agi/lionagi},
}
```
🦁 LionAGI
Because real AI orchestration demands more than a single prompt. Try it out and discover the next evolution in structured, multi-model, safe AI.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file lionagi-0.22.0.tar.gz.
File metadata
- Download URL: lionagi-0.22.0.tar.gz
- Upload date:
- Size: 1.2 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.20
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `10ee91bc324b5154c7785753afc56f5a6fede30a1cbe826d66d40e8813b396f5` |
| MD5 | `aef71aa3ed738bf60259fa706898ed71` |
| BLAKE2b-256 | `f9d2e961bb8a62d608aa845e34c2a1f5c3b69a29837cfafd1a55ff59bf7e8e3b` |
File details
Details for the file lionagi-0.22.0-py3-none-any.whl.
File metadata
- Download URL: lionagi-0.22.0-py3-none-any.whl
- Upload date:
- Size: 392.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.20
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `62f0bb40b4ed7824bcc0c17a1b614d10c163c6bc989be749658a62bfcf9816ed` |
| MD5 | `08178d4e4e36990be24d2831547cddda` |
| BLAKE2b-256 | `229f5b969d81bf4f7405727b1f0d3f21216364794a07ad37357aa45e45df6184` |