
Workflow tools for paper extraction, review, and research automation.


ai-deepresearch-flow

From documents to deep research insight — automatically.

English | 中文



The Core Pain Points

  • OCR Chaos: Raw markdown from OCR tools is often broken. Tables drift, formulas break, and references are not clickable.
  • Translation Nightmares: Translating technical papers often destroys code blocks, LaTeX formulas, and table structures.
  • Information Overload: Extracting structured insights (authors, venues, summaries) from hundreds of PDFs manually is impossible.
  • Context Switching: Managing PDFs, summaries, and translations in different windows kills focus.

The Solution

DeepResearch Flow provides a unified pipeline to Repair, Translate, Extract, and Serve your research library.

Key Features

  • Smart Extraction: Turn unstructured Markdown into schema-enforced JSON (summaries, metadata, Q&A) using LLMs (OpenAI, Claude, Gemini, etc.).
  • Precision Translation: Translate OCR Markdown to Chinese/Japanese (.zh.md, .ja.md) while freezing formulas, code, tables, and references. No more broken layout.
  • Local Knowledge DB: A high-performance local Web UI to browse papers with Split View (Source vs. Translated vs. Summary), full-text search, and multi-dimensional filtering.
  • OCR Post-Processing: Automatically fix broken references ([1] -> [^1]), merge split paragraphs, and standardize layouts.

Quick Start

1) Installation

# Recommended: using uv for speed
uv pip install deepresearch-flow

# Or standard pip
pip install deepresearch-flow

2) Configuration

Set up your LLM providers. We support OpenAI, Claude, Gemini, Ollama, and more.

cp config.example.toml config.toml
# Edit config.toml to add your API keys (e.g., env:OPENAI_API_KEY)

3) The "Zero to Hero" Workflow

Step 1: Extract Insights

Scan a folder of markdown files and extract structured summaries.

uv run deepresearch-flow paper extract \
  --input ./docs \
  --model openai/gpt-4o-mini \
  --prompt-template deep_read

Step 2: Translate Safely

Translate papers to Chinese, protecting LaTeX and tables.

uv run deepresearch-flow translator translate \
  --input ./docs \
  --target-lang zh \
  --model openai/gpt-4o-mini \
  --fix-level moderate

Step 3: Serve Your Database

Launch a local UI to read and manage your papers.

uv run deepresearch-flow paper db serve \
  --input paper_infos.json \
  --md-root ./docs \
  --md-translated-root ./docs \
  --host 127.0.0.1

Comprehensive Guide

1. Translator: OCR-Safe Translation

The translator module is built for scientific documents. It uses a node-based architecture to ensure stability.

  • Structure Protection: automatically detects and "freezes" code blocks, LaTeX ($$...$$), HTML tables, and images before sending text to the LLM.
  • OCR Repair: use --fix-level to merge broken paragraphs and convert text references ([1]) to clickable Markdown footnotes ([^1]).
  • Context-Aware: supports retries for failed chunks and falls back gracefully.
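The freeze-and-restore idea behind structure protection can be sketched in a few lines of Python. This is an illustrative outline of the technique only, not the translator's actual code: the function names and the placeholder format are invented here, and only display LaTeX is shown (the real module also freezes code blocks, HTML tables, and images).

```python
import re

# Spans that must survive translation untouched. Only display LaTeX is
# matched in this sketch; the real tool protects more span types.
PROTECTED = re.compile(r"\$\$.*?\$\$", re.DOTALL)

def freeze(text):
    """Replace protected spans with numbered placeholders before translation."""
    frozen = []
    def stash(match):
        frozen.append(match.group(0))
        return f"\u27eaBLOCK{len(frozen) - 1}\u27eb"
    return PROTECTED.sub(stash, text), frozen

def restore(translated, frozen):
    """Reinsert the original spans after the LLM returns."""
    for i, span in enumerate(frozen):
        translated = translated.replace(f"\u27eaBLOCK{i}\u27eb", span)
    return translated
```

The placeholders contain no translatable text, so a well-behaved LLM passes them through unchanged and the original formulas come back byte-identical.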

# Translate with structure protection and OCR repairs
uv run deepresearch-flow translator translate \
  --input ./paper.md \
  --target-lang ja \
  --fix-level aggressive \
  --model claude/claude-3-5-sonnet-20240620

2. Paper Extract: Structured Knowledge

Turn loose markdown files into a queryable database.

  • Templates: built-in prompts like simple, eight_questions, and deep_read guide the LLM to extract specific insights.
  • Async and throttled: precise control over concurrency (--max-concurrency) and rate limits (--sleep-every).
  • Incremental: skips already processed files; resumes from where you left off.

uv run deepresearch-flow paper extract \
  --input ./library \
  --output paper_data.json \
  --template-dir ./my-custom-prompts \
  --max-concurrency 10
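The --max-concurrency behaviour corresponds to a standard asyncio throttling pattern. A minimal sketch of the idea, with helper names invented for illustration rather than taken from the package:

```python
import asyncio

async def extract_all(paths, worker, max_concurrency=10):
    """Run `worker` over every path, never more than `max_concurrency` at once."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(path):
        async with sem:          # blocks while the pool is full
            return await worker(path)

    # gather preserves input order in its results
    return await asyncio.gather(*(bounded(p) for p in paths))
```

A semaphore keeps the number of in-flight LLM requests bounded regardless of how many files are queued, which is what makes large batch runs safe against provider rate limits.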

3. Database and UI: Your Personal ArXiv

The db serve command creates a local research station.

  • Split View: read the original PDF/Markdown on the left and the Summary/Translation on the right.
  • Full Text Search: search by title, author, year, or content tags (tag:fpga year:2023..2024).
  • Stats: visualize publication trends and keyword frequencies.
  • PDF Viewer: built-in PDF.js viewer prevents cross-origin issues with local files.

uv run deepresearch-flow paper db serve \
  --input paper_infos.json \
  --pdf-root ./pdfs \
  --cache-dir .cache/db

4. Recognize: OCR Post-Processing

Tools to clean up raw outputs from OCR engines like MinerU.

  • Embed Images: convert local image links to Base64 for a portable single-file Markdown.
  • Unpack Images: extract Base64 images back to files.
  • Organize: flatten nested OCR output directories.
  • Fix: apply OCR fixes and rumdl formatting during organize, or as a standalone step.

uv run deepresearch-flow recognize md embed --input ./raw_ocr --output ./clean_md
# Organize MinerU output and apply OCR fixes
uv run deepresearch-flow recognize organize \
  --input ./mineru_outputs \
  --output-simple ./ocr_md \
  --fix

# Fix and format existing markdown outputs
uv run deepresearch-flow recognize fix \
  --input ./ocr_md \
  --output ./ocr_md_fixed

# Fix in place
uv run deepresearch-flow recognize fix \
  --input ./ocr_md \
  --in-place
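The [1] -> [^1] footnote conversion performed by the fix step can be illustrated with a single regex pass. This is a simplified sketch, not the tool's real heuristics, which also have to skip code blocks and handle reference ranges:

```python
import re

def fix_references(text):
    """Convert bare numeric citations like [1] into Markdown footnote
    references like [^1]. Already-converted [^1] forms do not match
    the pattern, so running the pass twice is harmless."""
    return re.sub(r"\[(\d+)\]", r"[^\1]", text)
```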

Docker Support

Don't want to manage Python environments?

docker run --rm -v $(pwd):/app -it ghcr.io/nerdneilsfield/deepresearch-flow --help

Configuration

The config.toml is your control center. It supports:

  • Multiple Providers: mix and match OpenAI, DeepSeek (DashScope), Gemini, Claude, and Ollama.
  • Model Routing: explicit routing to specific models (--model provider/model_name).
  • Environment Variables: keep secrets safe using env:VAR_NAME syntax.

See config.example.toml for a full reference.
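To give a feel for the shape, a provider entry might look roughly like the fragment below. The table and key names here are assumptions for illustration; only the env:VAR_NAME syntax and the provider list are taken from this page, and config.example.toml remains the authoritative reference.

```toml
# Hypothetical shape only; see config.example.toml for the real schema.
[[providers]]
name = "openai"
api_key = "env:OPENAI_API_KEY"   # resolved from the environment at runtime

[[providers]]
name = "ollama"
base_url = "http://localhost:11434"
```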


Built with love for the Open Science community.
