# insightflow

Automated research tool that searches and analyzes web content, papers, blogs, and social media to generate comprehensive reports for any query. It leverages multiple LLM providers (OpenAI, Google, Perplexity, xAI) via OpenRouter to automate multi-perspective research on any topic.
## How It Works

```
Topic
  |
[1] Aspect Extraction (openai/gpt-4.1-mini)
  |
  +-- Aspect A --> [2] Parallel Web Search (perplexity/sonar-reasoning-pro) --> Report A
  +-- Aspect B --> [2] Parallel Web Search --> Report B
  +-- Aspect C --> [2] Parallel Web Search --> Report C
  |
[3] Report Synthesis (google/gemini-3-flash-preview)
  |
Final Report (Markdown + Citations)
```
1. **Aspect Extraction** - Identifies key perspectives on the topic using an LLM
2. **Parallel Search** - Searches each aspect concurrently via web-connected models
3. **Report Synthesis** - Merges all findings into a single, cited Markdown report
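The fan-out/fan-in flow above can be sketched in plain asyncio. This is an illustrative mock of the pattern, not insightflow's internals; `extract_aspects` and `search_aspect` are placeholder names standing in for the LLM calls:

```python
import asyncio

async def extract_aspects(topic: str) -> list[str]:
    # Placeholder for step [1]: an LLM call would return key perspectives.
    return [f"{topic}: hardware", f"{topic}: algorithms", f"{topic}: applications"]

async def search_aspect(aspect: str, sem: asyncio.Semaphore) -> str:
    # Placeholder for step [2]: a web-connected model searches one aspect.
    async with sem:  # cap concurrent searches
        await asyncio.sleep(0)
        return f"report for {aspect}"

async def research_sketch(topic: str, concurrency: int = 3) -> str:
    aspects = await extract_aspects(topic)            # [1] fan out into aspects
    sem = asyncio.Semaphore(concurrency)
    reports = await asyncio.gather(                   # [2] search in parallel
        *(search_aspect(a, sem) for a in aspects)
    )
    return "\n\n".join(reports)                       # [3] fan in to one report

print(asyncio.run(research_sketch("quantum computing")))
```

The semaphore mirrors the `concurrency` setting documented below: all searches are scheduled at once, but only a bounded number run concurrently.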
## Prerequisites

- Python 3.12+
- OpenRouter API key
## Installation

```bash
pip install insightflow
```

Or with uv:

```bash
uv add insightflow
```
### Extras

| Extra | Description | pip | uv |
|---|---|---|---|
| `cli` | CLI interface (Typer) | `pip install "insightflow[cli]"` | `uv add "insightflow[cli]"` |
| `api` | REST API server (FastAPI) | `pip install "insightflow[api]"` | `uv add "insightflow[api]"` |
| `mcp` | MCP server (Claude Code, etc.) | `pip install "insightflow[mcp]"` | `uv add "insightflow[mcp]"` |
| `all` | All extras | `pip install "insightflow[all]"` | `uv add "insightflow[all]"` |
## Setup

Set your OpenRouter API key as an environment variable:

```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
```

Or create a `.env` file in your project root:

```
OPENROUTER_API_KEY=sk-or-v1-...
```
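If you want to load the `.env` file yourself rather than rely on the package, a minimal stdlib parser is enough. This is a sketch, not insightflow's actual loader, and it only handles simple `KEY=VALUE` lines:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Set KEY=VALUE pairs from a dotenv-style file into os.environ."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # setdefault: an already-exported variable wins over the .env file
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env()
```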
## Usage

### Python Library

```python
import asyncio
import os

from insightflow.core import research
from insightflow.models import LLMConfig

api_key = os.environ["OPENROUTER_API_KEY"]

report = asyncio.run(research(
    topic="Recent trends in quantum computing",
    api_key=api_key,
    aspect_model=LLMConfig(model="openai/gpt-4.1-mini"),
    search_model=LLMConfig(model="perplexity/sonar-reasoning-pro"),
    report_model=LLMConfig(model="google/gemini-3-flash-preview"),
))

print(report.content)    # Markdown report
print(report.citations)  # List of citations
print(report.metadata)   # Elapsed time, model used, etc.
```
Individual functions are also available:

```python
import asyncio
import os

from insightflow.core import generate_queries, search, compose, build_aspect_prompt
from insightflow.models import LLMConfig

api_key = os.environ["OPENROUTER_API_KEY"]

# Query generation (aspect extraction)
result = asyncio.run(generate_queries(
    topic="ML optimization techniques",
    api_key=api_key,
    system_prompt=build_aspect_prompt(max_aspects=5),
    config=LLMConfig(model="openai/gpt-4.1-mini"),
))

# Single search query
result = asyncio.run(search(
    query="Python packaging best practices",
    api_key=api_key,
    config=LLMConfig(model="perplexity/sonar-reasoning-pro"),
))
```
### CLI

```bash
pip install "insightflow[cli]"  # or: uv add "insightflow[cli]"
```

```bash
# Full research
insightflow research "Recent trends in quantum computing"

# Aspect extraction only
insightflow aspects "ML optimization techniques"

# Single search
insightflow search "Python packaging best practices"

# With options
insightflow research "AI safety" \
  --language english \
  --max-aspects 3 \
  --search-model perplexity/sonar-pro \
  --json \
  -o report.json
```

Run `insightflow --help` for all available options.

Tip: You can also run insightflow without installing via `uvx`:

```bash
uvx --from "insightflow[cli]" insightflow research "Recent trends in quantum computing"
```
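With `--json -o report.json`, the output file can be post-processed with the stdlib. The `content` and `citations` field names below are an assumption based on the library's report object; check your actual `report.json` before relying on them:

```python
import json
from pathlib import Path

# Hypothetical report.json mirroring the report object's fields.
Path("report.json").write_text(json.dumps({
    "content": "# AI safety\n\nFindings...",
    "citations": ["https://example.com/source"],
}))

data = json.loads(Path("report.json").read_text())
print(data["content"].splitlines()[0])   # first heading of the report
print(len(data["citations"]), "citation(s)")
```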
### REST API Server

```bash
pip install "insightflow[api]"  # or: uv add "insightflow[api]"

python -m uvicorn insightflow.interfaces.api:app
```

Swagger UI is available at http://localhost:8000/docs.

```bash
curl -X POST http://localhost:8000/research \
  -H "Content-Type: application/json" \
  -d '{"topic": "quantum computing", "language": "english"}'
```
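The same request from Python, using only the stdlib. This sketch builds and inspects the request; with the server running, send it via `urllib.request.urlopen(req)`:

```python
import json
import urllib.request

payload = json.dumps({"topic": "quantum computing", "language": "english"}).encode()
req = urllib.request.Request(
    "http://localhost:8000/research",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the server running: body = urllib.request.urlopen(req).read()
print(req.get_method(), req.full_url)
```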
### MCP Server (Claude Code Integration)

Use insightflow as an MCP tool in Claude Code so that Claude can run research autonomously.

#### 1. Install & Register

```bash
pip install "insightflow[mcp]"  # or: uv add "insightflow[mcp]"

claude mcp add --transport stdio \
  --env OPENROUTER_API_KEY=sk-or-v1-... \
  insightflow -- python -m insightflow.interfaces.mcp
```

With uvx (no local install required):

```bash
claude mcp add --transport stdio \
  --env OPENROUTER_API_KEY=sk-or-v1-... \
  insightflow -- uvx --from "insightflow[mcp]" python -m insightflow.interfaces.mcp
```
**Manual configuration (alternative)**

Instead of `claude mcp add`, you can edit `~/.claude.json` directly:

```json
{
  "mcpServers": {
    "insightflow": {
      "command": "python",
      "args": ["-m", "insightflow.interfaces.mcp"],
      "env": { "OPENROUTER_API_KEY": "sk-or-v1-..." }
    }
  }
}
```

You can also use `.mcp.json` in your project root for team sharing, but do not include API keys in it; use `~/.claude.json` for secrets.
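For that team-sharing case, a project-level `.mcp.json` can declare the server without the secret, leaving the key to come from each developer's environment or `~/.claude.json`. This is a sketch based on the registration above, not a file shipped with the package:

```json
{
  "mcpServers": {
    "insightflow": {
      "command": "python",
      "args": ["-m", "insightflow.interfaces.mcp"]
    }
  }
}
```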
#### 2. Use from Claude Code

Once configured, Claude Code recognizes the research tool. Ask Claude to research a topic in conversation and it will call insightflow automatically.

Tool parameters:

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `topic` | string | Yes | - | Research topic |
| `aspect_model` | string | No | `openai/gpt-4.1-mini` | Model for aspect extraction |
| `search_model` | string | No | `perplexity/sonar-reasoning-pro` | Model for search |
| `report_model` | string | No | `google/gemini-3-flash-preview` | Model for report generation |
| `language` | string | No | `japanese` | Output language |
| `max_aspects` | integer | No | 5 | Maximum number of aspects |
| `concurrency` | integer | No | 3 | Maximum concurrent searches |
## Configuration

Configurable via environment variables or a `.env` file:

| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENROUTER_API_KEY` | Yes | - | OpenRouter API key |
| `DEFAULT_QUERY_MODEL` | No | `openai/gpt-4.1-mini` | Default query/aspect extraction model |
| `DEFAULT_SEARCH_MODEL` | No | `perplexity/sonar-reasoning-pro` | Default search model |
| `DEFAULT_REPORT_MODEL` | No | `google/gemini-3-flash-preview` | Default report generation model |
| `DEFAULT_LANGUAGE` | No | `japanese` | Default output language |
| `DEFAULT_MAX_ASPECTS` | No | 5 | Default number of aspects |
| `DEFAULT_CONCURRENCY` | No | 3 | Default concurrent searches |
## Development

```bash
git clone https://github.com/sync-dev-org/insightflow.git
cd insightflow
uv sync --all-extras

# Test
uv run pytest

# Lint
uv run ruff check src/
uv run ruff format src/
```

## License

MIT License - See LICENSE for details.
## File details

### insightflow-0.2.0.tar.gz (source distribution)

- Size: 29.8 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.9

| Algorithm | Hash digest |
|---|---|
| SHA256 | `5108db28a83299620f1c060be6aa607e6bda0cf400e299d3a9ca0ef1d737c3be` |
| MD5 | `d5181d93ffb6b34e2b5a446074539f74` |
| BLAKE2b-256 | `ceea6bb43b52763073a1f2403b37fb2b550572a1498e54b67f905ce5fce77884` |
### insightflow-0.2.0-py3-none-any.whl (built distribution, Python 3)

- Size: 35.9 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.9

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f8125cf85a84a4115f7b99058e4afe46f1ac29fbe8bfcef1436bd080afa9d27f` |
| MD5 | `337307ec635b65399057af8ca5fbc35f` |
| BLAKE2b-256 | `910a3360e34950d78624d4916a6429b384cfc1669596827b8aca612d2cc185ba` |