
An Intelligence Operating System.


Documentation | Discord | PyPI

LION - Language InterOperable Network

An Agentic Intelligence SDK

LionAGI is a robust framework for orchestrating multi-step AI operations with precise control. Bring together multiple models, advanced ReAct reasoning, tool integrations, and custom validations in a single coherent pipeline.

Why LionAGI?

  • Structured: Validate and type all LLM interactions with Pydantic.
  • Expandable: Integrate multiple providers (OpenAI, Anthropic, Perplexity, custom) with minimal friction.
  • Controlled: Use built-in safety checks, concurrency strategies, and advanced multi-step flows like ReAct.
  • Transparent: Debug easily with real-time logging, message introspection, and tool usage tracking.

Installation

uv add lionagi  # recommended to use pyproject and uv for dependency management

pip install lionagi # or install directly

Quick Start

from lionagi import Branch, iModel

# Pick a model
gpt4o = iModel(provider="openai", model="gpt-4o-mini")

# Create a Branch (conversation context)
hunter = Branch(
  system="you are a hilarious dragon hunter who responds in 10-word rhymes.",
  chat_model=gpt4o,
)

# Communicate asynchronously
response = await hunter.communicate("I am a dragon")
print(response)
# Output:
# You claim to be a dragon, oh what a braggin'!
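`communicate` is a coroutine, so top-level `await` works in notebooks and async REPLs, but a plain script needs an event loop. A minimal stdlib sketch of the pattern, with a stub coroutine standing in for lionagi's `Branch.communicate`:

```python
import asyncio

# Stub standing in for Branch.communicate (an async call).
# In real code you would `await hunter.communicate(...)` instead.
async def communicate(prompt: str) -> str:
    await asyncio.sleep(0)  # stands in for the awaited network round-trip
    return f"echo: {prompt}"

async def main() -> str:
    return await communicate("I am a dragon")

result = asyncio.run(main())
print(result)
```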

Structured Responses

Use Pydantic to keep outputs structured:

from pydantic import BaseModel

class Joke(BaseModel):
    joke: str

res = await hunter.operate(
    instruction="Tell me a short dragon joke",
    response_format=Joke
)
print(type(res))   # <class '__main__.Joke'>
print(res.joke)    # With fiery claws, dragons hide their laughter flaws!
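Conceptually, `response_format` means the model's raw output is parsed and validated into a typed object instead of being returned as free text. A stdlib-only sketch of that idea, using a dataclass in place of the Pydantic model (the `parse_structured` helper is illustrative, not lionagi's API):

```python
import json
from dataclasses import dataclass

@dataclass
class Joke:
    joke: str

def parse_structured(raw: str) -> Joke:
    # Decode the model's JSON output, then build the typed object;
    # a missing field raises instead of silently passing bad data through.
    data = json.loads(raw)
    return Joke(joke=data["joke"])

res = parse_structured('{"joke": "Why did the dragon cross the road?"}')
assert isinstance(res, Joke)
print(res.joke)
```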

ReAct and Tools

LionAGI supports advanced multi-step reasoning with ReAct. Tools let the LLM invoke external actions:

pip install "lionagi[reader]"

from lionagi import Branch, iModel
from lionagi.tools.types import ReaderTool

# Define model first
gpt4o = iModel(provider="openai", model="gpt-4o-mini")

branch = Branch(chat_model=gpt4o, tools=[ReaderTool])
result = await branch.ReAct(
    instruct={
      "instruction": "Summarize my PDF and compare with relevant papers.",
      "context": {"paper_file_path": "/path/to/paper.pdf"},
    },
    extension_allowed=True,     # allow multi-round expansions
    max_extensions=5,
    verbose=True,      # see step-by-step chain-of-thought
)
print(result)

The LLM can now open the PDF, read in slices, fetch references, and produce a final structured summary.
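The ReAct pattern itself is a loop: the model either requests a tool call or emits a final answer, and each tool result is fed back as an observation until it finishes or the extension budget runs out. A minimal stdlib sketch with stubbed model and tool (all names here are hypothetical, not lionagi internals):

```python
def fake_llm(history):
    # Stub "reasoning" step: request the tool once, then answer.
    if not any(m.startswith("observation:") for m in history):
        return {"action": "read_pdf", "args": {"path": "/path/to/paper.pdf"}}
    return {"final": "The paper proposes X; related work differs in Y."}

def read_pdf(path):
    return f"contents of {path}"  # stub tool

def react(task, max_extensions=5):
    history = [f"task: {task}"]
    for _ in range(max_extensions):
        step = fake_llm(history)
        if "final" in step:
            return step["final"]
        obs = read_pdf(**step["args"])          # invoke the requested tool
        history.append(f"observation: {obs}")   # feed the result back
    return "ran out of extensions"

print(react("Summarize my PDF"))
```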

MCP (Model Context Protocol) Integration

LionAGI supports Anthropic's Model Context Protocol for seamless tool integration:

pip install "lionagi[mcp]"

from lionagi import Branch, iModel, load_mcp_tools

# Model used by the Branch below
gpt4o = iModel(provider="openai", model="gpt-4o-mini")

# Load tools from any MCP server
tools = await load_mcp_tools(".mcp.json", ["search", "memory"])

# Use with ReAct reasoning
branch = Branch(chat_model=gpt4o, tools=tools)
result = await branch.ReAct(
    instruct={"instruction": "Research recent AI developments"},
    tools=["search_exa_search"],
    max_extensions=3
)
  • Dynamic Discovery: Auto-discover and register tools from MCP servers
  • Type Safety: Full Pydantic validation for tool interactions
  • Connection Pooling: Efficient resource management with automatic reuse
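Selecting a subset of servers from a config file, as `load_mcp_tools(".mcp.json", ["search", "memory"])` does, can be sketched with stdlib JSON. The config shape below is illustrative (the real `.mcp.json` schema and server entries may differ):

```python
import json

# Hypothetical .mcp.json-style config text.
raw = """
{
  "mcpServers": {
    "search":  {"command": "npx", "args": ["-y", "exa-mcp-server"]},
    "memory":  {"command": "npx", "args": ["-y", "memory-mcp-server"]},
    "scratch": {"command": "python", "args": ["scratch_server.py"]}
  }
}
"""

def select_servers(config_text, names):
    # Pick only the named servers from the parsed config.
    servers = json.loads(config_text)["mcpServers"]
    return {name: servers[name] for name in names}

selected = select_servers(raw, ["search", "memory"])
print(sorted(selected))
```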

Observability & Debugging

  • Inspect messages:
df = branch.to_df()
print(df.tail())
  • Action logs show each tool call, arguments, and outcomes.
  • Verbose ReAct provides chain-of-thought analysis (helpful for debugging multi-step flows).
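The shape of an action log entry (tool name, arguments, outcome) can be sketched in a few lines; field names here are illustrative, not lionagi's exact schema:

```python
# Each tool call is recorded with its name, arguments, and outcome.
action_log = []

def logged_call(tool, fn, **kwargs):
    result = fn(**kwargs)
    action_log.append({"tool": tool, "arguments": kwargs, "outcome": result})
    return result

logged_call("search", lambda query: f"3 hits for {query!r}", query="dragons")
for entry in action_log:
    print(entry["tool"], entry["arguments"], "->", entry["outcome"])
```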

Example: Multi-Model Orchestration

from lionagi import Branch, iModel

# Define models for multi-model orchestration
gpt4o = iModel(provider="openai", model="gpt-4o-mini")
sonnet = iModel(
  provider="anthropic",
  model="claude-3-5-sonnet-20241022",
  max_tokens=1000,                    # max_tokens is required for anthropic models
)

branch = Branch(chat_model=gpt4o)
analysis = await branch.communicate("Analyze these stats", chat_model=sonnet) # Switch mid-flow

Seamlessly route to different models in the same workflow.
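The routing mechanic is a per-call override: the branch keeps its default model, and passing `chat_model=` redirects just that one message. A stdlib sketch with stub coroutines standing in for `iModel` instances:

```python
import asyncio

# Stub models standing in for iModel instances.
async def gpt4o(prompt): return f"gpt-4o-mini: {prompt}"
async def sonnet(prompt): return f"claude-3-5-sonnet: {prompt}"

async def communicate(prompt, chat_model=gpt4o):
    # Per-call override: route just this message, keep the default intact.
    return await chat_model(prompt)

async def main():
    first = await communicate("Draft a summary")                      # default
    second = await communicate("Analyze these stats", chat_model=sonnet)
    return first, second

first, second = asyncio.run(main())
print(second)
```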

CLI Agent Integration

LionAGI integrates with coding agent CLIs as providers, enabling multi-agent orchestration across models:

| Provider | CLI | Models |
| --- | --- | --- |
| claude_code | Claude Code | sonnet, opus, haiku |
| codex | OpenAI Codex | gpt-5.3-codex-spark, gpt-5.4 |
| gemini_code | Gemini CLI | gemini-3.1-* (unstable) |

from lionagi import iModel, Branch

# Use any CLI agent as a model
agent = Branch(chat_model=iModel(provider="claude_code", model="sonnet"))
response = await agent.communicate("Explain the architecture of this codebase")

# Switch providers mid-flow
codex = iModel(provider="codex", model="gpt-5.3-codex-spark")
response2 = await agent.communicate("Compare with your analysis", chat_model=codex)

See the CLI Guide for the li command-line tool that wraps these providers with fan-out orchestration, session persistence, and effort control.

CLI — li

LionAGI ships a command-line tool li for spawning agents and orchestrating multi-agent fan-out patterns directly from your terminal. See the full CLI Guide for details.

# Single agent
li agent claude/sonnet "Explain the observer pattern"
li agent codex/gpt-5.3-codex-spark "Review this function for bugs" --yolo

# Fan-out: orchestrator decomposes task, N workers run in parallel, optional synthesis
li o fanout claude/sonnet "What are the key design patterns in this codebase?" -n 3 --with-synthesis

# Heterogeneous workers + different synthesis model
li o fanout claude/sonnet "Analyze error handling approaches" \
    --workers "claude/sonnet, codex/gpt-5.3-codex-spark" \
    --with-synthesis claude/opus-4-7-high

# Resume any conversation
li agent -r <branch-id> "follow up on your analysis"
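The fan-out pattern itself (decompose, run N workers concurrently, optionally synthesize) can be sketched with `asyncio.gather`; the worker and decomposition logic below are stubs, not the `li` implementation:

```python
import asyncio

async def worker(model, subtask):
    await asyncio.sleep(0)  # stands in for a real agent call
    return f"[{model}] {subtask}"

async def fanout(task, n=3, models=("claude/sonnet",), synthesize=True):
    # Orchestrator decomposes the task, workers run concurrently,
    # then an optional synthesis pass merges their answers.
    subtasks = [f"{task} (part {i + 1})" for i in range(n)]
    assigned = [models[i % len(models)] for i in range(n)]
    results = await asyncio.gather(
        *(worker(m, s) for m, s in zip(assigned, subtasks))
    )
    return "\n".join(results) if synthesize else results

out = asyncio.run(fanout("Analyze error handling", n=2))
print(out)
```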

Optional Dependencies

"lionagi[reader]" - Reader tool for any unstructured data and web pages
"lionagi[ollama]" - Ollama model support for local inference
"lionagi[rich]" - Rich output formatting for better console display
"lionagi[schema]" - Convert Pydantic schemas into persistent model classes
"lionagi[postgres]" - Postgres database support for storing and retrieving structured data
"lionagi[graph]" - Graph display for visualizing complex workflows
"lionagi[sqlite]" - SQLite database support for lightweight data storage (also requires the `postgres` extra)

Community & Contributing

We welcome issues, ideas, and pull requests:

  • Discord: Join to chat or get help
  • Issues / PRs: GitHub

Citation

@software{Li_LionAGI_2023,
  author = {Haiyang Li},
  month = {12},
  year = {2023},
  title = {LionAGI: Towards Automated General Intelligence},
  url = {https://github.com/lion-agi/lionagi},
}

🦁 LionAGI

Because real AI orchestration demands more than a single prompt. Try it out and discover the next evolution in structured, multi-model, safe AI.
