
PraisonAI is an AI Agents Framework with Self Reflection. The PraisonAI application combines PraisonAI Agents, AutoGen, and CrewAI into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration.

Project description



PraisonAI 🦞


PraisonAI 🦞 — Automate and solve complex challenges with AI agent teams that plan, research, code, and deliver results to Telegram, Discord, and WhatsApp — running 24/7. A low-code, production-ready multi-agent framework with handoffs, guardrails, memory, RAG, and 100+ LLM providers, built around simplicity, customisation, and effective human-agent collaboration.




⚡ Performance

PraisonAI Agents is the fastest AI agent framework measured by agent-instantiation time.

Framework Avg Time (μs) Relative
PraisonAI 3.77 1.00x (fastest)
OpenAI Agents SDK 5.26 1.39x
Agno 5.64 1.49x
PraisonAI (LiteLLM) 7.56 2.00x
PydanticAI 226.94 60.16x
LangGraph 4,558.71 1,209x
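The numbers above come from micro-benchmarking object construction. As a rough illustration of the methodology only (this is not the project's actual benchmark harness, and `DummyAgent` is a stand-in class, not a PraisonAI API), instantiation time can be averaged with `timeit`:

```python
import timeit

class DummyAgent:
    """Stand-in for an agent class; real benchmarks construct framework agents."""
    def __init__(self, instructions: str):
        self.instructions = instructions

# Average construction time over many runs, reported in microseconds.
runs = 100_000
total = timeit.timeit(lambda: DummyAgent(instructions="You are helpful"), number=runs)
avg_us = total / runs * 1e6
print(f"avg instantiation: {avg_us:.2f} us")
```

Relative columns in the table are each framework's average divided by the fastest average.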

🚀 Quick Start

Get started with PraisonAI in under 1 minute:

# Install
pip install praisonaiagents

# Set API key
export OPENAI_API_KEY=your_key_here

# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"

Next Steps: Single Agent Example | Multi Agents | Full Docs


🌟 Why PraisonAI?

Feature How
⚡ Fastest framework — 3.77μs (1,209x faster than LangGraph) Benchmarks
🦞 Dashboard UI — chat, agents, memory, knowledge, cron pip install "praisonai[claw]"
🔌 MCP Protocol — stdio, HTTP, WebSocket, SSE tools=MCP("npx ...")
🧠 Planning Mode — plan → execute → reason planning=True
🔍 Deep Research — multi-step autonomous research Docs
🤖 External Agents — orchestrate Claude Code, Gemini CLI, Codex Docs
🔄 Agent Handoffs — seamless conversation passing handoff=True
🌐 100+ LLM Providers — OpenAI, Anthropic, Gemini, Ollama, Groq... Models
🛡️ Guardrails — input/output validation Docs
💾 20+ Databases — one-line persistence db=db("postgresql://...")
🔎 Web Search + Fetch — native browsing web_search=True
🪞 Self Reflection — agent reviews its own output Docs
Workflow Patterns — route, parallel, loop, repeat Docs
💡 Prompt Caching — reduce latency + cost prompt_caching=True
🧠 Memory (zero deps) — works out of the box memory=True
Sessions + Auto-Save — persistent state across restarts auto_save="my-project"
💭 Thinking Budgets — control reasoning depth thinking_budget=1024
📚 RAG + Quality-Based RAG — auto quality scoring retrieval Docs
Messaging Bots — Telegram, Discord, Slack, WhatsApp praisonai bot telegram
📊 Model Router — auto-routes to cheapest capable model Docs
🧊 Shadow Git Checkpoints — auto-rollback on failure Docs
📡 A2A Protocol — agent-to-agent interop Docs
Context Compaction — never hit token limits Docs
📡 Telemetry — OpenTelemetry traces, spans, metrics Docs
📜 Policy Engine — declarative agent behavior control Docs
🔄 Background Tasks — fire-and-forget agents Docs
🔁 Doom Loop Detection — auto-recovery from stuck agents Docs
🕸️ Graph Memory — Neo4j-style relationship tracking Docs
Sandbox Execution — isolated code execution Docs
🖥️ Bot Gateway — multi-agent routing across channels Docs

📦 Installation

Python SDK

A lightweight package dedicated to coding:

pip install praisonaiagents

For the full framework with CLI support:

pip install praisonai

🦞 Full Dashboard with bots, memory, knowledge, and gateway:

pip install "praisonai[claw]"

JavaScript SDK

npm install praisonai

Environment Variables

Variable Required Description
OPENAI_API_KEY Yes* OpenAI API key
ANTHROPIC_API_KEY No Anthropic Claude API key
GOOGLE_API_KEY No Google Gemini API key
GROQ_API_KEY No Groq API key
OPENAI_BASE_URL No Custom API endpoint (for Ollama, Groq, etc.)

*At least one LLM provider API key is required.

# Set your API key
export OPENAI_API_KEY=your_key_here

# For Ollama (local models)
export OPENAI_BASE_URL=http://localhost:11434/v1

# For Groq
export OPENAI_API_KEY=your_groq_key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
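The same OpenAI-compatible configuration can thus be pointed at different providers by swapping the base URL. A minimal sketch of how an endpoint might be resolved from these variables (the `resolve_endpoint` helper is illustrative, not part of PraisonAI):

```python
import os

def resolve_endpoint() -> "tuple[str, str]":
    """Pick the base URL and API key from the environment, defaulting to OpenAI."""
    base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    api_key = os.environ.get("OPENAI_API_KEY", "")
    return base_url, api_key

# With OPENAI_BASE_URL unset, requests go to the official OpenAI endpoint:
os.environ.pop("OPENAI_BASE_URL", None)
print(resolve_endpoint()[0])  # https://api.openai.com/v1

# Pointing at a local Ollama server instead:
os.environ["OPENAI_BASE_URL"] = "http://localhost:11434/v1"
print(resolve_endpoint()[0])  # http://localhost:11434/v1
```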

✨ Key Features

🤖 Core Agents
Feature Code Docs
Single Agent Example 📖
Multi Agents Example 📖
Auto Agents Example 📖
Self Reflection AI Agents Example 📖
Reasoning AI Agents Example 📖
Multi Modal AI Agents Example 📖
🔄 Workflows
Feature Code Docs
Simple Workflow Example 📖
Workflow with Agents Example 📖
Agentic Routing (route()) Example 📖
Parallel Execution (parallel()) Example 📖
Loop over List/CSV (loop()) Example 📖
Evaluator-Optimizer (repeat()) Example 📖
Conditional Steps Example 📖
Workflow Branching Example 📖
Workflow Early Stop Example 📖
Workflow Checkpoints Example 📖
💻 Code & Development
Feature Code Docs
Code Interpreter Agents Example 📖
AI Code Editing Tools Example 📖
External Agents (All) Example 📖
Claude Code CLI Example 📖
Gemini CLI Example 📖
Codex CLI Example 📖
Cursor CLI Example 📖
🧠 Memory & Knowledge
Feature Code Docs
Memory (Short & Long Term) Example 📖
File-Based Memory Example 📖
Claude Memory Tool Example 📖
Add Custom Knowledge Example 📖
RAG Agents Example 📖
Chat with PDF Agents Example 📖
Data Readers (PDF, DOCX, etc.) CLI 📖
Vector Store Selection CLI 📖
Retrieval Strategies CLI 📖
Rerankers CLI 📖
Index Types (Vector/Keyword/Hybrid) CLI 📖
Query Engines (Sub-Question, etc.) CLI 📖
🔬 Research & Intelligence
Feature Code Docs
Deep Research Agents Example 📖
Query Rewriter Agent Example 📖
Native Web Search Example 📖
Built-in Search Tools Example 📖
Unified Web Search Example 📖
Web Fetch (Anthropic) Example 📖
📋 Planning & Execution
Feature Code Docs
Planning Mode Example 📖
Planning Tools Example 📖
Planning Reasoning Example 📖
Prompt Chaining Example 📖
Evaluator Optimiser Example 📖
Orchestrator Workers Example 📖
👥 Specialized Agents
Feature Code Docs
Data Analyst Agent Example 📖
Finance Agent Example 📖
Shopping Agent Example 📖
Recommendation Agent Example 📖
Wikipedia Agent Example 📖
Programming Agent Example 📖
Math Agents Example 📖
Markdown Agent Example 📖
Prompt Expander Agent Example 📖
🎨 Media & Multimodal
Feature Code Docs
Image Generation Agent Example 📖
Image to Text Agent Example 📖
Video Agent Example 📖
Camera Integration Example 📖
🔌 Protocols & Integration
Feature Code Docs
MCP Transports Example 📖
WebSocket MCP Example 📖
MCP Security Example 📖
MCP Resumability Example 📖
MCP Config Management Docs 📖
LangChain Integrated Agents Example 📖
🛡️ Safety & Control
Feature Code Docs
Guardrails Example 📖
Human Approval Example 📖
Rules & Instructions Docs 📖
⚙️ Advanced Features
Feature Code Docs
Async & Parallel Processing Example 📖
Parallelisation Example 📖
Repetitive Agents Example 📖
Agent Handoffs Example 📖
Stateful Agents Example 📖
Autonomous Workflow Example 📖
Structured Output Agents Example 📖
Model Router Example 📖
Prompt Caching Example 📖
Fast Context Example 📖
🛠️ Tools & Configuration
Feature Code Docs
100+ Custom Tools Example 📖
YAML Configuration Example 📖
100+ LLM Support Example 📖
Callback Agents Example 📖
Hooks Example 📖
Middleware System Example 📖
Configurable Model Example 📖
Rate Limiter Example 📖
Injected Tool State Example 📖
Shadow Git Checkpoints Example 📖
Background Tasks Example 📖
Policy Engine Example 📖
Thinking Budgets Example 📖
Output Styles Example 📖
Context Compaction Example 📖
📊 Monitoring & Management
Feature Code Docs
Sessions Management Example 📖
Auto-Save Sessions Docs 📖
History in Context Docs 📖
Telemetry Example 📖
Project Docs (.praison/docs/) Docs 📖
AI Commit Messages Docs 📖
@Mentions in Prompts Docs 📖
🖥️ CLI Features
Feature Code Docs
Slash Commands Example 📖
Autonomy Modes Example 📖
Cost Tracking Example 📖
Repository Map Example 📖
Interactive TUI Example 📖
Git Integration Example 📖
Sandbox Execution Example 📖
CLI Compare Example 📖
Profile/Benchmark Docs 📖
Auto Mode Docs 📖
Init Docs 📖
File Input Docs 📖
Final Agent Docs 📖
Max Tokens Docs 📖
🧪 Evaluation
Feature Code Docs
Accuracy Evaluation Example 📖
Performance Evaluation Example 📖
Reliability Evaluation Example 📖
Criteria Evaluation Example 📖
🎯 Agent Skills
Feature Code Docs
Skills Management Example 📖
Custom Skills Example 📖
⏰ 24/7 Scheduling
Feature Code Docs
Agent Scheduler Example 📖

๐ŸŒ Supported Providers

PraisonAI supports 100+ LLM providers through seamless integration:

View all 24 providers
Provider Example
OpenAI Example
Anthropic Example
Google Gemini Example
Ollama Example
Groq Example
DeepSeek Example
xAI Grok Example
Mistral Example
Cohere Example
Perplexity Example
Fireworks Example
Together AI Example
OpenRouter Example
HuggingFace Example
Azure OpenAI Example
AWS Bedrock Example
Google Vertex Example
Databricks Example
Cloudflare Example
AI21 Example
Replicate Example
SageMaker Example
Moonshot Example
vLLM Example

📘 Using Python Code

1. Single Agent

from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")

2. Multi Agents

from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()
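Conceptually, Agents runs the listed agents in sequence, feeding each agent's output to the next; the exact handoff shape is an implementation detail of the library. A framework-free sketch of that data flow, with plain functions standing in for LLM calls:

```python
from typing import Callable, List

def run_pipeline(steps: "List[Callable[[str], str]]", task: str) -> str:
    """Pass the task through each step; each step sees the previous step's output."""
    result = task
    for step in steps:
        result = step(result)
    return result

# Toy stand-ins for the research and summarise agents above.
research = lambda topic: f"findings about {topic}"
summarise = lambda findings: f"summary of {findings}"

print(run_pipeline([research, summarise], "AI"))
# summary of findings about AI
```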

3. MCP (Model Context Protocol)

from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)

📖 Full MCP docs — stdio, HTTP, WebSocket, SSE transports

4. Custom Tools

from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Warning: eval() executes arbitrary Python; validate or sandbox input in production.
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
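The `calculate` tool above uses `eval`, which will happily execute arbitrary Python handed to it by the model. A safer sketch restricts input to arithmetic by walking the AST; `safe_calculate` is illustrative and not part of the library:

```python
import ast
import operator

# Map AST operator nodes to their arithmetic implementations.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a pure arithmetic expression, rejecting anything else."""
    def _eval(node: ast.AST) -> float:
        if (isinstance(node, ast.Constant)
                and isinstance(node.value, (int, float))
                and not isinstance(node.value, bool)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_calculate("15*4"))  # 60
```

Function calls, attribute access, and names all raise `ValueError`, so `safe_calculate("__import__('os')")` fails instead of importing a module.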

📖 Full tools docs — BaseTool, tool packages, 100+ built-in tools

5. Persistence (Databases)

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces

📖 Full persistence docs — PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
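Under the hood, this kind of persistence amounts to writing each message against a session key and reading it back in order. A minimal, framework-free sqlite sketch of the idea (PraisonAI's actual schema and API will differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")

def save_message(session_id: str, role: str, content: str) -> None:
    """Persist one chat message under its session."""
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content))

def load_session(session_id: str) -> list:
    """Return (role, content) pairs for a session, in insertion order."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
        (session_id,),
    )
    return list(rows)

save_message("my-session", "user", "Hello!")
save_message("my-session", "assistant", "Hi! How can I help?")
print(load_session("my-session"))
```

Resuming with the same `session_id` then means loading these rows back into the agent's context before the next turn.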


🎯 CLI Quick Reference

Category Commands
Execution praisonai, --auto, --interactive, --chat
Research research, --query-rewrite, --deep-research
Planning --planning, --planning-tools, --planning-reasoning
Workflows workflow run, workflow list, workflow auto
Memory memory show, memory add, memory search, memory clear
Knowledge knowledge add, knowledge query, knowledge list
Sessions session list, session resume, session delete
Tools tools list, tools info, tools search
MCP mcp list, mcp create, mcp enable
Development commit, docs, checkpoint, hooks
Scheduling schedule start, schedule list, schedule stop

📖 Full CLI reference


💻 Using JavaScript Code

npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');

PraisonAI CLI Demo


โญ Star History

Star History Chart


🎓 Video Tutorials

Learn PraisonAI through our comprehensive video series:

View all 22 video tutorials
Topic Video
AI Agents with Self Reflection Self Reflection
Reasoning Data Generating Agent Reasoning Data
AI Agents with Reasoning Reasoning
Multimodal AI Agents Multimodal
AI Agents Workflow Workflow
Async AI Agents Async
Mini AI Agents Mini
AI Agents with Memory Memory
Repetitive Agents Repetitive
Introduction Introduction
Tools Overview Tools Overview
Custom Tools Custom Tools
Firecrawl Integration Firecrawl
User Interface UI
Crawl4AI Integration Crawl4AI
Chat Interface Chat
Code Interface Code
Mem0 Integration Mem0
Training Training
Realtime Voice Interface Realtime
Call Interface Call
Reasoning Extract Agents Reasoning Extract

👥 Contributing

We welcome contributions from the community! Here's how you can contribute:

  1. Fork on GitHub - Use the "Fork" button on the repository page
  2. Clone your fork - git clone https://github.com/yourusername/praisonAI.git
  3. Create a branch - git checkout -b new-feature
  4. Make changes and commit - git commit -am "Add some feature"
  5. Push to your fork - git push origin new-feature
  6. Submit a pull request - Via GitHub's web interface
  7. Await feedback - From project maintainers

🔧 Development

Using uv

# Install uv if you haven't already
pip install uv

# Install from requirements
uv pip install -r pyproject.toml

# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"

Bump and Release

# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99

# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169

# Then publish
cd src/praisonai && uv publish

โ“ FAQ & Troubleshooting

ModuleNotFoundError: No module named 'praisonaiagents'

Install the package:

pip install praisonaiagents
API key not found / Authentication error

Ensure your API key is set:

export OPENAI_API_KEY=your_key_here

For other providers, see Environment Variables.

How do I use a local model (Ollama)?
# Start Ollama server first
ollama serve

# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1

See Models docs for more details.

How do I persist conversations to a database?

Use the db parameter:

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)

See Persistence docs for supported databases.

How do I enable agent memory?
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)

See Memory docs for more options.

How do I run multiple agents together?
from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()

See Agents docs for more examples.

How do I use MCP tools?
from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)

See MCP docs for all transport options.

Getting Help


Made with ❤️ by the PraisonAI Team

Documentation • GitHub • Issues
