PraisonAI is an AI Agents Framework with Self Reflection. The PraisonAI application combines PraisonAI Agents, AutoGen, and CrewAI into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration.
Project description
PraisonAI is a production-ready Multi-AI Agents framework with self-reflection, designed to create AI Agents to automate and solve problems ranging from simple tasks to complex challenges. By integrating PraisonAI Agents, AG2 (Formerly AutoGen), and CrewAI into a low-code solution, it streamlines the building and management of multi-agent LLM systems, emphasising simplicity, customisation, and effective human-agent collaboration.
Key Features
- Automated AI Agents Creation
- Self Reflection AI Agents
- Reasoning AI Agents
- Multi Modal AI Agents
- Multi Agent Collaboration
- AI Agent Workflow
- Add Custom Knowledge
- Agents with Short and Long Term Memory
- Chat with PDF Agents
- Code Interpreter Agents
- RAG Agents
- Async & Parallel Processing
- Auto Agents
- Math Agents
- Structured Output Agents
- LangChain Integrated Agents
- Callback Agents
- Mini AI Agents
- 100+ Custom Tools
- YAML Configuration
- 100+ LLM Support
- Deep Research Agents (OpenAI & Gemini)
- Query Rewriter Agent (HyDE, Step-back, Multi-query)
Using Python Code
A lightweight package dedicated to coding:
pip install praisonaiagents
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
1. Single Agent
Create an app.py file and add the code below:
from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
Run:
python app.py
2. Multi Agents
Create an app.py file and add the code below:
from praisonaiagents import Agent, PraisonAIAgents
research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = PraisonAIAgents(agents=[research_agent, summarise_agent])
agents.start()
Run:
python app.py
3. Deep Research Agent
Automated research with real-time streaming, web search, and citations using OpenAI or Gemini Deep Research APIs.
from praisonaiagents import DeepResearchAgent
# OpenAI Deep Research
agent = DeepResearchAgent(
    model="o4-mini-deep-research",  # or "o3-deep-research"
    verbose=True
)
result = agent.research("What are the latest AI trends in 2025?")
print(result.report)
print(f"Citations: {len(result.citations)}")
# Gemini Deep Research
from praisonaiagents import DeepResearchAgent
agent = DeepResearchAgent(
    model="deep-research-pro",  # Auto-detected as Gemini
    verbose=True
)
result = agent.research("Research quantum computing advances")
print(result.report)
Features:
- Multi-provider support (OpenAI, Gemini, LiteLLM)
- Real-time streaming with reasoning summaries
- Structured citations with URLs
- Built-in tools: web search, code interpreter, MCP, file search
- Automatic provider detection from model name
4. Query Rewriter Agent
Transform user queries to improve RAG retrieval quality using multiple strategies.
from praisonaiagents import QueryRewriterAgent, RewriteStrategy
agent = QueryRewriterAgent(model="gpt-4o-mini")
# Basic - expands abbreviations, adds context
result = agent.rewrite("AI trends")
print(result.primary_query) # "What are the current trends in Artificial Intelligence?"
# HyDE - generates hypothetical document for semantic matching
result = agent.rewrite("What is quantum computing?", strategy=RewriteStrategy.HYDE)
# Step-back - generates broader context question
result = agent.rewrite("GPT-4 vs Claude 3?", strategy=RewriteStrategy.STEP_BACK)
# Sub-queries - decomposes complex questions
result = agent.rewrite("RAG setup and best embedding models?", strategy=RewriteStrategy.SUB_QUERIES)
# Contextual - resolves references using chat history
result = agent.rewrite("What about cost?", chat_history=[...])
Strategies:
- BASIC: Expand abbreviations, fix typos, add context
- HYDE: Generate hypothetical document for semantic matching
- STEP_BACK: Generate higher-level concept questions
- SUB_QUERIES: Decompose multi-part questions
- MULTI_QUERY: Generate multiple paraphrased versions
- CONTEXTUAL: Resolve references using conversation history
- AUTO: Automatically detect best strategy
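To make the MULTI_QUERY strategy concrete, here is a toy plain-Python illustration of the idea: produce several paraphrases of one query so retrieval can match more documents. This is a naive template-based stand-in for illustration only, not the agent's LLM-driven rewriting.

```python
# Toy MULTI_QUERY sketch: fan one query out into paraphrased variants.
# Real query rewriting uses an LLM; these templates are placeholders.
def multi_query(query: str) -> list:
    return [
        query,                              # original query
        f"What is known about {query}?",    # question paraphrase
        f"Give an overview of {query}.",    # instruction paraphrase
    ]

print(multi_query("AI trends"))
```

Each variant is embedded and searched separately, and the retrieved documents are merged before generation.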
Using No Code
Auto Mode:
pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --auto create a movie script about robots on Mars
Query Rewriting (works with any command):
# Rewrite query for better results (uses QueryRewriterAgent)
praisonai "AI trends" --query-rewrite
# Rewrite with search tools (agent decides when to search)
praisonai "latest developments" --query-rewrite --rewrite-tools "internet_search"
# Works with any prompt
praisonai "explain quantum computing" --query-rewrite -v
Deep Research CLI:
# Default: OpenAI (o4-mini-deep-research)
praisonai research "What are the latest AI trends in 2025?"
# Use Gemini
praisonai research --model deep-research-pro "Your research query"
# Rewrite query before research
praisonai research --query-rewrite "AI trends"
# Rewrite with search tools
praisonai research --query-rewrite --rewrite-tools "internet_search" "AI trends"
# Use custom tools from file (gathers context before deep research)
praisonai research --tools tools.py "Your research query"
praisonai research -t my_tools.py "Your research query"
# Use built-in tools by name (comma-separated)
praisonai research --tools "internet_search,wiki_search" "Your query"
praisonai research -t "yfinance,calculator_tools" "Stock analysis query"
# Save output to file (output/research/{query}.md)
praisonai research --save "Your research query"
praisonai research -s "Your research query"
# Combine options
praisonai research --query-rewrite --tools tools.py --save "Your research query"
# Verbose mode (show debug logs)
praisonai research -v "Your research query"
Using JavaScript Code
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
Star History
AI Agents Flow
graph LR
%% Define the main flow
Start([Start]) --> Agent1
Agent1 --> Process[Process]
Process --> Agent2
Agent2 --> Output([Output])
Process -.-> Agent1
%% Define subgraphs for agents and their tasks
subgraph Agent1[ ]
Task1[Task]
AgentIcon1[AI Agent]
Tools1[Tools]
Task1 --- AgentIcon1
AgentIcon1 --- Tools1
end
subgraph Agent2[ ]
Task2[Task]
AgentIcon2[AI Agent]
Tools2[Tools]
Task2 --- AgentIcon2
AgentIcon2 --- Tools2
end
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef tools fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Start,Output,Task1,Task2 input
class Process,AgentIcon1,AgentIcon2 process
class Tools1,Tools2 tools
class Agent1,Agent2 transparent
AI Agents with Tools
Create AI agents that can use tools to interact with external systems and perform actions.
flowchart TB
subgraph Tools
direction TB
T3[Internet Search]
T1[Code Execution]
T2[Formatting]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
T3 --> A1
T1 --> A2
T2 --> A3
style Tools fill:#189AB4,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
AI Agents with Memory
Create AI agents with memory capabilities for maintaining context and information across tasks.
flowchart TB
subgraph Memory
direction TB
STM[Short Term]
LTM[Long Term]
end
subgraph Store
direction TB
DB[(Vector DB)]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
Memory <--> Store
Store <--> A1
Store <--> A2
Store <--> A3
style Memory fill:#189AB4,color:#fff
style Store fill:#2E8B57,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
AI Agents with Different Processes
Sequential Process
The simplest form of task execution where tasks are performed one after another.
graph LR
Input[Input] --> A1
subgraph Agents
direction LR
A1[Agent 1] --> A2[Agent 2] --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class A1,A2,A3 process
class Agents transparent
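The sequential flow above can be sketched in plain Python with stub agents, where each function stands in for an LLM-backed agent (these stubs are illustrative, not the PraisonAI API):

```python
# Minimal sketch of a sequential process: each stub "agent" transforms
# the previous agent's output, one after another.
def research(task: str) -> str:
    return f"research notes on {task}"

def summarise(text: str) -> str:
    return f"summary of {text}"

def format_output(text: str) -> str:
    return f"report: {text}"

def run_sequential(task: str) -> str:
    result = task
    for agent in (research, summarise, format_output):
        result = agent(result)  # output of one agent feeds the next
    return result

print(run_sequential("AI"))
```

This is exactly what `PraisonAIAgents(agents=[...])` does by default: each agent's output becomes the next agent's input.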
Hierarchical Process
Uses a manager agent to coordinate task execution and agent assignments.
graph TB
Input[Input] --> Manager
subgraph Agents
Manager[Manager Agent]
subgraph Workers
direction LR
W1[Worker 1]
W2[Worker 2]
W3[Worker 3]
end
Manager --> W1
Manager --> W2
Manager --> W3
end
W1 --> Manager
W2 --> Manager
W3 --> Manager
Manager --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Manager,W1,W2,W3 process
class Agents,Workers transparent
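The manager/worker pattern above can be sketched in plain Python, with stub functions standing in for the manager and worker agents (illustrative only, not the PraisonAI API):

```python
# Sketch of a hierarchical process: a manager plans sub-tasks, delegates
# them to workers, and synthesizes the results.
def worker(name: str, subtask: str) -> str:
    return f"{name} finished {subtask}"

def manager(task: str) -> str:
    subtasks = [f"{task} part {i}" for i in range(1, 4)]   # plan the work
    results = [worker(f"Worker {i}", st)                    # delegate
               for i, st in enumerate(subtasks, start=1)]
    return "; ".join(results)                               # synthesize

print(manager("analysis"))
```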
Workflow Process
Advanced process type supporting complex task relationships and conditional execution.
graph LR
Input[Input] --> Start
subgraph Workflow
direction LR
Start[Start] --> C1{Condition}
C1 --> |Yes| A1[Agent 1]
C1 --> |No| A2[Agent 2]
A1 --> Join
A2 --> Join
Join --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef decision fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Start,A1,A2,A3,Join process
class C1 decision
class Workflow transparent
Agentic Routing Workflow
Create AI agents that can dynamically route tasks to specialized LLM instances.
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Out[Out]
LLM2 --> Out
LLM3 --> Out
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
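The routing pattern above can be sketched in plain Python: a router inspects the input and dispatches it to one of several specialized handlers (the handlers here are stubs for LLM calls, and the routing rules are simplistic placeholders):

```python
# Sketch of agentic routing: classify the query, then dispatch it to
# the matching specialized handler.
def code_llm(q: str) -> str: return f"[code] {q}"
def math_llm(q: str) -> str: return f"[math] {q}"
def general_llm(q: str) -> str: return f"[general] {q}"

def route(query: str) -> str:
    if "def " in query or "bug" in query:       # looks like a coding task
        return code_llm(query)
    if any(ch.isdigit() for ch in query):       # looks numeric
        return math_llm(query)
    return general_llm(query)                   # fallback

print(route("what is 2 + 2"))
```

In a real system the router itself is an LLM call that returns the chosen route, rather than keyword rules.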
Agentic Orchestrator Worker
Create AI agents that orchestrate and distribute tasks among specialized workers.
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Synthesizer[Synthesizer]
LLM2 --> Synthesizer
LLM3 --> Synthesizer
Synthesizer --> Out[Out]
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Synthesizer fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
Agentic Autonomous Workflow
Create AI agents that can autonomously monitor, act, and adapt based on environment feedback.
flowchart LR
Human[Human] <--> LLM[LLM Call]
LLM -->|ACTION| Environment[Environment]
Environment -->|FEEDBACK| LLM
LLM --> Stop[Stop]
style Human fill:#8B0000,color:#fff
style LLM fill:#2E8B57,color:#fff
style Environment fill:#8B0000,color:#fff
style Stop fill:#333,color:#fff
Agentic Parallelization
Create AI agents that can execute tasks in parallel for improved performance.
flowchart LR
In[In] --> LLM2[LLM Call 2]
In --> LLM1[LLM Call 1]
In --> LLM3[LLM Call 3]
LLM1 --> Aggregator[Aggregator]
LLM2 --> Aggregator
LLM3 --> Aggregator
Aggregator --> Out[Out]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Aggregator fill:#fff,color:#000
style Out fill:#8B0000,color:#fff
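The fan-out/aggregate pattern above can be sketched with asyncio, using stub coroutines in place of real LLM calls:

```python
import asyncio

# Sketch of parallelization: three stub LLM calls run concurrently and
# an aggregator joins their outputs.
async def llm_call(n: int, query: str) -> str:
    await asyncio.sleep(0)  # stands in for network latency
    return f"answer {n} to {query}"

async def run_parallel(query: str) -> str:
    answers = await asyncio.gather(*(llm_call(n, query) for n in (1, 2, 3)))
    return " | ".join(answers)  # aggregator

print(asyncio.run(run_parallel("AI trends")))
```

`asyncio.gather` preserves the order of the awaitables, so the aggregator sees answers 1, 2, 3 regardless of which call finishes first.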
Agentic Prompt Chaining
Create AI agents with sequential prompt chaining for complex workflows.
flowchart LR
In[In] --> LLM1[LLM Call 1] --> Gate{Gate}
Gate -->|Pass| LLM2[LLM Call 2] -->|Output 2| LLM3[LLM Call 3] --> Out[Out]
Gate -->|Fail| Exit[Exit]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
style Exit fill:#8B0000,color:#fff
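The gate in the chain above can be sketched in plain Python: call 1's output is checked before being passed along, and failures exit early (the stubs and the length-based gate are placeholders for real LLM calls and checks):

```python
# Sketch of prompt chaining with a gate between calls.
def llm1(q: str) -> str: return f"draft: {q}"
def llm2(t: str) -> str: return f"refined {t}"
def llm3(t: str) -> str: return f"final {t}"

def gate(text: str) -> bool:
    return len(text) > 10  # placeholder quality check

def chain(query: str) -> str:
    out1 = llm1(query)
    if not gate(out1):
        return "exit: gate failed"   # Fail branch
    return llm3(llm2(out1))          # Pass branch

print(chain("movie script"))
```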
Agentic Evaluator Optimizer
Create AI agents that can generate and optimize solutions through iterative feedback.
flowchart LR
In[In] --> Generator[LLM Call Generator]
Generator -->|SOLUTION| Evaluator[LLM Call Evaluator] -->|ACCEPTED| Out[Out]
Evaluator -->|REJECTED + FEEDBACK| Generator
style In fill:#8B0000,color:#fff
style Generator fill:#2E8B57,color:#fff
style Evaluator fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
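The generate/evaluate loop above can be sketched in plain Python with stub functions (the accept-on-v2 rule is an artificial stand-in for a real evaluator):

```python
# Sketch of the evaluator-optimizer loop: the generator proposes a
# solution, the evaluator accepts it or returns feedback, and the
# generator retries until accepted or the retry budget runs out.
def generate(task, feedback=None):
    return f"{task} v2" if feedback else f"{task} v1"

def evaluate(solution):
    if solution.endswith("v2"):
        return True, ""
    return False, "needs more detail"

def optimize(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        solution = generate(task, feedback)
        accepted, feedback = evaluate(solution)
        if accepted:
            return solution
    return solution  # best effort after budget exhausted

print(optimize("script"))
```

The retry budget matters in practice: without it, a generator that never satisfies the evaluator would loop forever.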
Repetitive Agents
Create AI agents that can efficiently handle repetitive tasks through automated loops.
flowchart LR
In[Input] --> LoopAgent[("Looping Agent")]
LoopAgent --> Task[Task]
Task --> |Next iteration| LoopAgent
Task --> |Done| Out[Output]
style In fill:#8B0000,color:#fff
style LoopAgent fill:#2E8B57,color:#fff,shape:circle
style Task fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
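The loop above can be sketched in plain Python: the same task runs over a queue of items until none remain, mirroring the "Next iteration / Done" edges in the diagram (the item-processing function is a stub):

```python
# Sketch of a looping agent: repeat one task over a work queue.
def process_item(item: str) -> str:
    return item.upper()  # stub for the per-item agent task

def loop_agent(items: list) -> list:
    results = []
    queue = list(items)
    while queue:                              # Next iteration
        results.append(process_item(queue.pop(0)))
    return results                            # Done

print(loop_agent(["draft", "review", "publish"]))
```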
Adding Models
Ollama Integration
export OPENAI_BASE_URL=http://localhost:11434/v1
Groq Integration
Replace xxxxxxxxxxx with your Groq API key:
export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
No Code Options
Agents Playbook
Simple Playbook Example
Create agents.yaml file and add the code below:
framework: praisonai
topic: Artificial Intelligence
roles:
  screenwriter:
    backstory: "Skilled in crafting scripts with engaging dialogue about {topic}."
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."
To run the playbook:
praisonai agents.yaml
Use 100+ Models
Custom Tools
Using @tool Decorator
from praisonaiagents import Agent, tool
@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Caution: eval executes arbitrary code; restrict input in production.
    return eval(expression)
agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
Using BaseTool Class
from praisonaiagents import Agent, BaseTool
class WeatherTool(BaseTool):
    name = "weather"
    description = "Get current weather for a location"

    def run(self, location: str) -> str:
        return f"Weather in {location}: 72°F, Sunny"

agent = Agent(
    instructions="You are a weather assistant",
    tools=[WeatherTool()]
)
agent.start("What's the weather in Paris?")
Creating a Tool Package (pip installable)
# pyproject.toml
[project]
name = "my-praisonai-tools"
version = "1.0.0"
dependencies = ["praisonaiagents"]
[project.entry-points."praisonaiagents.tools"]
my_tool = "my_package:MyTool"
# my_package/__init__.py
from praisonaiagents import BaseTool
class MyTool(BaseTool):
    name = "my_tool"
    description = "My custom tool"

    def run(self, param: str) -> str:
        return f"Result: {param}"
After pip install, tools are auto-discovered:
agent = Agent(tools=["my_tool"]) # Works automatically!
Prompt Expansion
Expand short prompts into detailed, actionable prompts:
CLI Usage
# Expand a short prompt into detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt
# With verbose output
praisonai "blog about AI" --expand-prompt -v
# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py
# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt
Programmatic Usage
from praisonaiagents import PromptExpanderAgent, ExpandStrategy
# Basic usage
agent = PromptExpanderAgent()
result = agent.expand("write a movie script in 3 lines")
print(result.expanded_prompt)
# With specific strategy
result = agent.expand("blog about AI", strategy=ExpandStrategy.DETAILED)
# Available strategies: BASIC, DETAILED, STRUCTURED, CREATIVE, AUTO
Key Difference:
- --query-rewrite: Optimizes queries for search/retrieval (RAG)
- --expand-prompt: Expands prompts for detailed task execution
Development:
Below is used for development only.
Using uv
# Install uv if you haven't already
pip install uv
# Install from requirements
uv pip install -r pyproject.toml
# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"
Bump and Release
# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99
# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169
# Then publish
cd src/praisonai && uv publish
Contributing
- Fork on GitHub: Use the "Fork" button on the repository page.
- Clone your fork: git clone https://github.com/yourusername/praisonAI.git
- Create a branch: git checkout -b new-feature
- Make changes and commit: git commit -am "Add some feature"
- Push to your fork: git push origin new-feature
- Submit a pull request via GitHub's web interface.
- Await feedback from project maintainers.
Other Features
- Use CrewAI or AG2 (Formerly AutoGen) Framework
- Chat with ENTIRE Codebase
- Interactive UIs
- YAML-based Configuration
- Custom Tool Integration
- Internet Search Capability (using Crawl4AI and Tavily)
- Vision Language Model (VLM) Support
- Real-time Voice Interaction
Video Tutorials
File details
Details for the file praisonai-2.3.0.tar.gz.
File metadata
- Download URL: praisonai-2.3.0.tar.gz
- Upload date:
- Size: 119.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.25
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8f8c2f97371e9518eef2348130b2972b60834f037bd2cb946207bcc1323cf5c3 |
| MD5 | 080200b275a9a6a044507622884990d0 |
| BLAKE2b-256 | 2a6acc3f1e4d77cc90d641151dd61f2c352c375eeeaba3a3b7d3358afcc14bf0 |
File details
Details for the file praisonai-2.3.0-py3-none-any.whl.
File metadata
- Download URL: praisonai-2.3.0-py3-none-any.whl
- Upload date:
- Size: 123.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.25
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 47f66b1d22bf9729f8e346e426947678e43a5ae8e19177479dd14dd1a831b6b3 |
| MD5 | e6ae20495d6408b6c46ecf89d59b9b27 |
| BLAKE2b-256 | abcabf5eea27c7d7140c35ca83035b1669a93c884a6bc9ffede264a3cc16ece4 |