PraisonAI is an AI agents framework with self-reflection. The PraisonAI application combines PraisonAI Agents, AutoGen, and CrewAI into a low-code solution for building and managing multi-agent LLM systems, with a focus on simplicity, customisation, and efficient human-agent collaboration.
PraisonAI 🤖 – Automate and solve complex challenges with AI agent teams that plan, research, code, and deliver results to Telegram, Discord, and WhatsApp, running 24/7. A low-code, production-ready multi-agent framework with handoffs, guardrails, memory, RAG, and 100+ LLM providers, built around simplicity, customisation, and effective human-agent collaboration.
Quick Paths:
- 🚀 New here? → Quick Start (1 minute to first agent)
- 📦 Installing? → Installation
- 🐍 Python SDK? → Python Examples
- 🎯 CLI user? → CLI Quick Reference
- 🤝 Contributing? → Development
⚡ Performance

In our micro-benchmarks of agent instantiation, PraisonAI Agents is the fastest framework measured:

| Framework | Avg Time (μs) | Relative |
|---|---|---|
| PraisonAI | 3.77 | 1.00x (fastest) |
| OpenAI Agents SDK | 5.26 | 1.39x |
| Agno | 5.64 | 1.49x |
| PraisonAI (LiteLLM) | 7.56 | 2.00x |
| PydanticAI | 226.94 | 60.16x |
| LangGraph | 4,558.71 | 1,209x |
🚀 Quick Start
Get started with PraisonAI in under 1 minute:
```bash
# Install
pip install praisonaiagents

# Set API key
export OPENAI_API_KEY=your_key_here

# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"
```
Next Steps: Single Agent Example | Multi Agents | Full Docs
🌟 Why PraisonAI?

| | Feature | Details |
|---|---|---|
| ⚡ | Fastest framework – 3.77μs (1,209x faster than LangGraph) | Benchmarks |
| 📊 | Dashboard UI – chat, agents, memory, knowledge, cron | `pip install "praisonai[claw]"` |
| 🔌 | MCP Protocol – stdio, HTTP, WebSocket, SSE | `tools=MCP("npx ...")` |
| 🧠 | Planning Mode – plan → execute → reason | `planning=True` |
| 🔍 | Deep Research – multi-step autonomous research | Docs |
| 🤝 | External Agents – orchestrate Claude Code, Gemini CLI, Codex | Docs |
| 🔄 | Agent Handoffs – seamless conversation passing | `handoff=True` |
| 🌐 | 100+ LLM Providers – OpenAI, Anthropic, Gemini, Ollama, Groq... | Models |
| 🛡️ | Guardrails – input/output validation | Docs |
| 💾 | 20+ Databases – one-line persistence | `db=db("postgresql://...")` |
| 🔎 | Web Search + Fetch – native browsing | `web_search=True` |
| 🪞 | Self Reflection – agent reviews its own output | Docs |
| 🔀 | Workflow Patterns – route, parallel, loop, repeat | Docs |
| 💡 | Prompt Caching – reduce latency + cost | `prompt_caching=True` |
| 🧠 | Memory (zero deps) – works out of the box | `memory=True` |
| 🗂️ | Sessions + Auto-Save – persistent state across restarts | `auto_save="my-project"` |
| 🎭 | Thinking Budgets – control reasoning depth | `thinking_budget=1024` |
| 📚 | RAG + Quality-Based RAG – auto quality scoring retrieval | Docs |
| 🤖 | Messaging Bots – Telegram, Discord, Slack, WhatsApp | `praisonai bot telegram` |
| 🚦 | Model Router – auto-routes to cheapest capable model | Docs |
| 🔧 | Shadow Git Checkpoints – auto-rollback on failure | Docs |
| 📡 | A2A Protocol – agent-to-agent interop | Docs |
| 📉 | Context Compaction – never hit token limits | Docs |
| 📈 | Telemetry – OpenTelemetry traces, spans, metrics | Docs |
| 📜 | Policy Engine – declarative agent behavior control | Docs |
| 🌙 | Background Tasks – fire-and-forget agents | Docs |
| 🔁 | Doom Loop Detection – auto-recovery from stuck agents | Docs |
| 🕸️ | Graph Memory – Neo4j-style relationship tracking | Docs |
| 🏝️ | Sandbox Execution – isolated code execution | Docs |
| 🖥️ | Bot Gateway – multi-agent routing across channels | Docs |
📦 Installation

Python SDK

Lightweight package dedicated to coding:

```bash
pip install praisonaiagents
```

For the full framework with CLI support:

```bash
pip install praisonai
```

Full Dashboard with bots, memory, knowledge, and gateway:

```bash
pip install "praisonai[claw]"
```
JavaScript SDK
```bash
npm install praisonai
```
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes* | OpenAI API key |
| `ANTHROPIC_API_KEY` | No | Anthropic Claude API key |
| `GOOGLE_API_KEY` | No | Google Gemini API key |
| `GROQ_API_KEY` | No | Groq API key |
| `OPENAI_BASE_URL` | No | Custom API endpoint (for Ollama, Groq, etc.) |

*At least one LLM provider API key is required.
```bash
# Set your API key
export OPENAI_API_KEY=your_key_here

# For Ollama (local models)
export OPENAI_BASE_URL=http://localhost:11434/v1

# For Groq
export OPENAI_API_KEY=your_groq_key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```
✨ Key Features

🤖 Core Agents

| Feature | Code | Docs |
|---|---|---|
| Single Agent | Example | 📖 |
| Multi Agents | Example | 📖 |
| Auto Agents | Example | 📖 |
| Self Reflection AI Agents | Example | 📖 |
| Reasoning AI Agents | Example | 📖 |
| Multi Modal AI Agents | Example | 📖 |
🔁 Workflows

| Feature | Code | Docs |
|---|---|---|
| Simple Workflow | Example | 📖 |
| Workflow with Agents | Example | 📖 |
| Agentic Routing (`route()`) | Example | 📖 |
| Parallel Execution (`parallel()`) | Example | 📖 |
| Loop over List/CSV (`loop()`) | Example | 📖 |
| Evaluator-Optimizer (`repeat()`) | Example | 📖 |
| Conditional Steps | Example | 📖 |
| Workflow Branching | Example | 📖 |
| Workflow Early Stop | Example | 📖 |
| Workflow Checkpoints | Example | 📖 |
💻 Code & Development

| Feature | Code | Docs |
|---|---|---|
| Code Interpreter Agents | Example | 📖 |
| AI Code Editing Tools | Example | 📖 |
| External Agents (All) | Example | 📖 |
| Claude Code CLI | Example | 📖 |
| Gemini CLI | Example | 📖 |
| Codex CLI | Example | 📖 |
| Cursor CLI | Example | 📖 |

🧠 Memory & Knowledge

| Feature | Code | Docs |
|---|---|---|
| Memory (Short & Long Term) | Example | 📖 |
| File-Based Memory | Example | 📖 |
| Claude Memory Tool | Example | 📖 |
| Add Custom Knowledge | Example | 📖 |
| RAG Agents | Example | 📖 |
| Chat with PDF Agents | Example | 📖 |
| Data Readers (PDF, DOCX, etc.) | CLI | 📖 |
| Vector Store Selection | CLI | 📖 |
| Retrieval Strategies | CLI | 📖 |
| Rerankers | CLI | 📖 |
| Index Types (Vector/Keyword/Hybrid) | CLI | 📖 |
| Query Engines (Sub-Question, etc.) | CLI | 📖 |
🔬 Research & Intelligence

| Feature | Code | Docs |
|---|---|---|
| Deep Research Agents | Example | 📖 |
| Query Rewriter Agent | Example | 📖 |
| Native Web Search | Example | 📖 |
| Built-in Search Tools | Example | 📖 |
| Unified Web Search | Example | 📖 |
| Web Fetch (Anthropic) | Example | 📖 |

📋 Planning & Execution

| Feature | Code | Docs |
|---|---|---|
| Planning Mode | Example | 📖 |
| Planning Tools | Example | 📖 |
| Planning Reasoning | Example | 📖 |
| Prompt Chaining | Example | 📖 |
| Evaluator Optimiser | Example | 📖 |
| Orchestrator Workers | Example | 📖 |
👥 Specialized Agents

| Feature | Code | Docs |
|---|---|---|
| Data Analyst Agent | Example | 📖 |
| Finance Agent | Example | 📖 |
| Shopping Agent | Example | 📖 |
| Recommendation Agent | Example | 📖 |
| Wikipedia Agent | Example | 📖 |
| Programming Agent | Example | 📖 |
| Math Agents | Example | 📖 |
| Markdown Agent | Example | 📖 |
| Prompt Expander Agent | Example | 📖 |

🎨 Media & Multimodal

| Feature | Code | Docs |
|---|---|---|
| Image Generation Agent | Example | 📖 |
| Image to Text Agent | Example | 📖 |
| Video Agent | Example | 📖 |
| Camera Integration | Example | 📖 |
🌐 Protocols & Integration

| Feature | Code | Docs |
|---|---|---|
| MCP Transports | Example | 📖 |
| WebSocket MCP | Example | 📖 |
| MCP Security | Example | 📖 |
| MCP Resumability | Example | 📖 |
| MCP Config Management | Docs | 📖 |
| LangChain Integrated Agents | Example | 📖 |

🛡️ Safety & Control

| Feature | Code | Docs |
|---|---|---|
| Guardrails | Example | 📖 |
| Human Approval | Example | 📖 |
| Rules & Instructions | Docs | 📖 |
⚙️ Advanced Features

| Feature | Code | Docs |
|---|---|---|
| Async & Parallel Processing | Example | 📖 |
| Parallelisation | Example | 📖 |
| Repetitive Agents | Example | 📖 |
| Agent Handoffs | Example | 📖 |
| Stateful Agents | Example | 📖 |
| Autonomous Workflow | Example | 📖 |
| Structured Output Agents | Example | 📖 |
| Model Router | Example | 📖 |
| Prompt Caching | Example | 📖 |
| Fast Context | Example | 📖 |

🛠️ Tools & Configuration

| Feature | Code | Docs |
|---|---|---|
| 100+ Custom Tools | Example | 📖 |
| YAML Configuration | Example | 📖 |
| 100+ LLM Support | Example | 📖 |
| Callback Agents | Example | 📖 |
| Hooks | Example | 📖 |
| Middleware System | Example | 📖 |
| Configurable Model | Example | 📖 |
| Rate Limiter | Example | 📖 |
| Injected Tool State | Example | 📖 |
| Shadow Git Checkpoints | Example | 📖 |
| Background Tasks | Example | 📖 |
| Policy Engine | Example | 📖 |
| Thinking Budgets | Example | 📖 |
| Output Styles | Example | 📖 |
| Context Compaction | Example | 📖 |
📊 Monitoring & Management

| Feature | Code | Docs |
|---|---|---|
| Sessions Management | Example | 📖 |
| Auto-Save Sessions | Docs | 📖 |
| History in Context | Docs | 📖 |
| Telemetry | Example | 📖 |
| Project Docs (.praison/docs/) | Docs | 📖 |
| AI Commit Messages | Docs | 📖 |
| @Mentions in Prompts | Docs | 📖 |

🖥️ CLI Features

| Feature | Code | Docs |
|---|---|---|
| Slash Commands | Example | 📖 |
| Autonomy Modes | Example | 📖 |
| Cost Tracking | Example | 📖 |
| Repository Map | Example | 📖 |
| Interactive TUI | Example | 📖 |
| Git Integration | Example | 📖 |
| Sandbox Execution | Example | 📖 |
| CLI Compare | Example | 📖 |
| Profile/Benchmark | Docs | 📖 |
| Auto Mode | Docs | 📖 |
| Init | Docs | 📖 |
| File Input | Docs | 📖 |
| Final Agent | Docs | 📖 |
| Max Tokens | Docs | 📖 |

🧪 Evaluation

| Feature | Code | Docs |
|---|---|---|
| Accuracy Evaluation | Example | 📖 |
| Performance Evaluation | Example | 📖 |
| Reliability Evaluation | Example | 📖 |
| Criteria Evaluation | Example | 📖 |
🌍 Supported Providers

PraisonAI supports 100+ LLM providers; the table below highlights 24 of the most popular:
View all 24 providers
| Provider | Example |
|---|---|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
🐍 Using Python Code

1. Single Agent

```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
```
2. Multi Agents

```python
from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")

agents = Agents(agents=[research_agent, summarise_agent])
agents.start()
```
3. MCP (Model Context Protocol)

```python
from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
```

📖 Full MCP docs: stdio, HTTP, WebSocket, SSE transports
4. Custom Tools

```python
from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Caution: eval() executes arbitrary code; only use with trusted input
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
```

📖 Full tools docs: BaseTool, tool packages, 100+ built-in tools
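Decorators like `@tool` typically work by reading a function's signature and docstring to build a machine-readable description the LLM can call. Here is a stdlib-only sketch of that idea; the schema shape below follows the common OpenAI-style tool format and is an assumption for illustration, not PraisonAI's internals.

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-Schema type names (a common convention).
_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style tool schema from a typed, documented function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters go into the schema
    params = {name: {"type": _TYPES.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

schema = tool_schema(search)
print(schema["name"], schema["parameters"]["properties"])
```

This is why type hints and docstrings matter when writing custom tools: they are the raw material the framework exposes to the model.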
5. Persistence (Databases)

```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces
```

📖 Full persistence docs: PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
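Conceptually, message persistence of this kind amounts to keying chat records by session ID so a later run can reload the conversation. The sqlite3 sketch below illustrates that concept only; the table layout and helper names are assumptions for the sketch, not PraisonAI's actual schema.

```python
import sqlite3

# Minimal session-keyed message store; illustrative, not PraisonAI's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")

def save_message(session_id: str, role: str, content: str) -> None:
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content))

def load_session(session_id: str) -> list:
    """Return (role, content) pairs for a session, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
        (session_id,),
    )
    return rows.fetchall()

save_message("my-session", "user", "Hello!")
save_message("my-session", "assistant", "Hi, how can I help?")
print(load_session("my-session"))
```

Swapping the connection string is what makes the "20+ databases, one-line persistence" claim possible: the storage layer changes, the session-keyed model stays the same.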
🎯 CLI Quick Reference
| Category | Commands |
|---|---|
| Execution | praisonai, --auto, --interactive, --chat |
| Research | research, --query-rewrite, --deep-research |
| Planning | --planning, --planning-tools, --planning-reasoning |
| Workflows | workflow run, workflow list, workflow auto |
| Memory | memory show, memory add, memory search, memory clear |
| Knowledge | knowledge add, knowledge query, knowledge list |
| Sessions | session list, session resume, session delete |
| Tools | tools list, tools info, tools search |
| MCP | mcp list, mcp create, mcp enable |
| Development | commit, docs, checkpoint, hooks |
| Scheduling | schedule start, schedule list, schedule stop |
📖 Full CLI reference
💻 Using JavaScript Code

```bash
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```

```javascript
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
```
⭐ Star History

🎥 Video Tutorials
Learn PraisonAI through our comprehensive video series:
View all 22 video tutorials
👥 Contributing

We welcome contributions from the community! Here's how you can contribute:

- Fork on GitHub: use the "Fork" button on the repository page
- Clone your fork: `git clone https://github.com/yourusername/praisonAI.git`
- Create a branch: `git checkout -b new-feature`
- Make changes and commit: `git commit -am "Add some feature"`
- Push to your fork: `git push origin new-feature`
- Submit a pull request via GitHub's web interface
- Await feedback from project maintainers
🔧 Development

Using uv

```bash
# Install uv if you haven't already
pip install uv

# Install from requirements
uv pip install -r pyproject.toml

# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"
```

Bump and Release

```bash
# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99

# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169

# Then publish
cd src/praisonai && uv publish
```
❓ FAQ & Troubleshooting

ModuleNotFoundError: No module named 'praisonaiagents'

Install the package:

```bash
pip install praisonaiagents
```

API key not found / Authentication error

Ensure your API key is set:

```bash
export OPENAI_API_KEY=your_key_here
```

For other providers, see Environment Variables.

How do I use a local model (Ollama)?

```bash
# Start the Ollama server first
ollama serve

# Point the SDK at the local endpoint
export OPENAI_BASE_URL=http://localhost:11434/v1
```

See Models docs for more details.
How do I persist conversations to a database?

Use the `db` parameter:

```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
```

See Persistence docs for supported databases.

How do I enable agent memory?

```python
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)
```

See Memory docs for more options.

How do I run multiple agents together?

```python
from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")

agents = Agents(agents=[agent1, agent2])
agents.start()
```

See Agents docs for more examples.

How do I use MCP tools?

```python
from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)
```

See MCP docs for all transport options.
Getting Help
- 📖 Full Documentation
- 🐛 Report Issues
- 💬 Discussions

Made with ❤️ by the PraisonAI Team

Documentation • GitHub • Issues