# PaperPilot
PaperPilot is a CLI research agent for AI-related literature review. It turns a natural-language research request into a verified paper corpus, code/PDF collection, evidence-grounded synthesis, and bilingual reports in Markdown, HTML, and PDF.
It is designed as a file-system based research workflow, not a chatbot. Each run creates a self-contained run folder with state, logs, intermediate artifacts, evidence checks, and final reports.
## Highlights
- Natural-language research intake with LLM-assisted query understanding.
- Rich interactive CLI with startup model/source status, `/model`, `/sources`, and structured confirmation panels.
- Layered Source Registry with arXiv, Semantic Scholar, OpenAlex, Crossref, OpenReview, PubMed, Europe PMC, bioRxiv, medRxiv, DBLP, ACL Anthology, and optional API-key sources.
- Local corpus import with `--user-corpus` for PDF, BibTeX, RIS, Markdown, and text files.
- Research protocol generation with inclusion/exclusion criteria and negative keywords.
- Corpus normalization, DOI/arXiv/title-similarity deduplication, ranking, and relevance screening.
- Code repository detection for GitHub, GitLab, Hugging Face, and project pages.
- Open-access PDF download only; no paywall bypassing.
- Full-text extraction for downloaded PDFs.
- Prompt Registry, Tool Registry, Capability Registry, and event logging.
- Evidence ledger that maps report-level claims to numbered paper citations.
- Review-agent checks for source verification, relevance, citation compliance, and overclaiming risk.
- Canonical bilingual report model with aligned Chinese/English Markdown, HTML, and PDF outputs.
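The DOI/arXiv/title-similarity deduplication mentioned above can be sketched as follows. The key order (DOI, then arXiv ID, then a fuzzy normalized-title fallback) and the similarity threshold are illustrative assumptions, not PaperPilot's exact implementation:

```python
from difflib import SequenceMatcher

def normalize_title(title: str) -> str:
    # Lowercase and keep only alphanumerics so punctuation/spacing differences collapse.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def dedup(papers: list[dict], threshold: float = 0.92) -> list[dict]:
    """Drop duplicates by DOI, then arXiv ID, then fuzzy title match."""
    kept: list[dict] = []
    seen_ids: set[str] = set()
    for p in papers:
        key = p.get("doi") or p.get("arxiv_id")
        if key and key.lower() in seen_ids:
            continue  # exact identifier match
        title = normalize_title(p.get("title", ""))
        if any(
            SequenceMatcher(None, title, normalize_title(q.get("title", ""))).ratio() >= threshold
            for q in kept
        ):
            continue  # near-identical title
        if key:
            seen_ids.add(key.lower())
        kept.append(p)
    return kept
```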
## Installation

From PyPI:

```shell
python -m pip install paperpilot -i https://pypi.org/simple
```

For local development:

```shell
git clone https://github.com/CHB-learner/PaperPilot.git
cd PaperPilot
python -m pip install -e .
```
## LLM Configuration
PaperPilot requires an OpenAI-compatible LLM configuration for query understanding, planning, screening, synthesis, and report generation.
Interactive setup:

```shell
PaperPilot
```

Manual setup:

```shell
PaperPilot config set --base-url https://api.deepseek.com --model deepseek-chat
PaperPilot config import ./api.json
PaperPilot config list
PaperPilot config use deepseek
PaperPilot config show
```

Optional source API keys:

```shell
PaperPilot sources list
PaperPilot sources config core
PaperPilot sources config lens
PaperPilot sources enable core
PaperPilot sources test core
```
Inside interactive mode, use `/sources` to view the same source/API status table without leaving the session.
Configuration is stored in `~/.paperpilot/config.json`.
Configuration priority (highest first):

- Environment variables: `OPENAI_API_KEY`, `OPENAI_BASE_URL`, `OPENAI_MODEL`
- User config: `~/.paperpilot/config.json`
- Legacy project file: `llmapi.txt`
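The precedence above can be sketched as a layered lookup, where later (higher-priority) layers overwrite earlier ones. The function and field names here are illustrative, not PaperPilot's internals:

```python
import json
import os
from pathlib import Path

def resolve_llm_config(user_config_path: str = "~/.paperpilot/config.json",
                       legacy_path: str = "llmapi.txt") -> dict:
    """Return LLM settings, preferring env vars, then user config, then legacy file."""
    config: dict = {}
    legacy = Path(legacy_path)
    if legacy.exists():  # lowest priority: legacy project file
        config["api_key"] = legacy.read_text().strip()
    user_cfg = Path(user_config_path).expanduser()
    if user_cfg.exists():  # middle priority: user config JSON
        config.update(json.loads(user_cfg.read_text()))
    env_map = {"api_key": "OPENAI_API_KEY",
               "base_url": "OPENAI_BASE_URL",
               "model": "OPENAI_MODEL"}
    for field, var in env_map.items():  # highest priority: environment variables
        if os.environ.get(var):
            config[field] = os.environ[var]
    return config
```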
Do not commit `api.json`, `llmapi.txt`, `.env`, or any file containing API keys.
## Quick Start

Interactive mode:

```shell
PaperPilot
```
The interactive shell shows the active LLM profile, model API status, free-source coverage, optional API-key source coverage, and quick commands:
- `/model`: manage LLM profiles
- `/sources`: inspect enabled and optional search sources
- `/help`: show the startup guide again
- `exit`: quit
Command mode:

```shell
PaperPilot "RNA inverse folding sequence design" \
  --auto-confirm \
  --max-papers 50 \
  --since-year 2021 \
  --github-filter required \
  --sources auto \
  --mode apa \
  --quality balanced
```
Use local papers as a seed corpus:

```shell
PaperPilot "RNA inverse folding sequence design" \
  --auto-confirm \
  --user-corpus ./papers \
  --user-corpus references.bib
```
Skip PDF downloads:

```shell
PaperPilot "vision language model" --auto-confirm --no-download
```
Inspect or rerun a task:

```shell
PaperPilot inspect runs/<task-id>
PaperPilot resume runs/<task-id>
```
## Architecture
PaperPilot follows a state-machine workflow:
```
Intake -> Protocol -> Search -> Corpus -> Screening -> Verification -> Synthesis -> Review -> Report
```
```mermaid
flowchart LR
    U[User request<br/>topic + params + local corpus] --> C[Run context<br/>task/state/events]
    C --> P[Prompt Registry]
    P --> QA[Query Understanding Agent]
    QA --> PL[Planner Agent]
    PL --> RP[Research Protocol Agent]
    RP --> ST[Source Registry<br/>arXiv / S2 / OpenAlex / Crossref / OpenReview<br/>PubMed / Europe PMC / bioRxiv / medRxiv / DBLP / ACL]
    U --> LC[Local Corpus Import]
    LC --> CB[Corpus Builder]
    ST --> CB
    CB --> RJ[Relevance Judge<br/>core / adjacent / exclude]
    RJ --> VF[Verification + PDF Tools]
    VF --> LM[Literature Matrix]
    LM --> SA[Synthesis Agent]
    SA --> QG[Quality Gate + Reflection]
    QG --> EL[Evidence Ledger<br/>claim -> citation]
    EL --> RA[Review Agents<br/>source / citation / overclaiming]
    RA --> CR[Canonical Report]
    CR --> OUT[ZH/EN Markdown<br/>ZH/EN HTML<br/>ZH/EN PDF]
```
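The stage sequence can be modeled as a minimal linear state machine. The stage names follow the pipeline above; the transition helper is an illustrative sketch (a real run may retry stages on a quality-gate failure rather than advance strictly forward):

```python
from enum import Enum
from typing import Optional

class Stage(Enum):
    INTAKE = "intake"
    PROTOCOL = "protocol"
    SEARCH = "search"
    CORPUS = "corpus"
    SCREENING = "screening"
    VERIFICATION = "verification"
    SYNTHESIS = "synthesis"
    REVIEW = "review"
    REPORT = "report"

# Happy-path order; enum members iterate in definition order.
PIPELINE = list(Stage)

def next_stage(current: Stage) -> Optional[Stage]:
    """Return the stage after `current`, or None once the report is written."""
    idx = PIPELINE.index(current)
    return PIPELINE[idx + 1] if idx + 1 < len(PIPELINE) else None
```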
Default free sources include arXiv, Semantic Scholar, OpenAlex, Crossref, OpenReview, PubMed, Europe PMC, bioRxiv, medRxiv, DBLP, and ACL Anthology. Optional API-key sources include CORE, Lens.org, IEEE Xplore, Springer Nature, Elsevier/Scopus, and Dimensions.
The repository also includes an HTML architecture overview: `paperpilot_agent_flow.html`.
## Output Artifacts

Each run writes a folder under `runs/<task-id>/` unless `--output-dir` is provided.
Core run files:
- `task.json`: task metadata and parameters.
- `state.json`: stage status.
- `events.jsonl`: stage event stream.
- `manifest.json`: generated artifact list.
- `prompt_manifest.json`: versioned prompt roles and required JSON keys.
- `registries.json`: built-in ToolRegistry and CapabilityRegistry.
- `source_diagnostics.json`: enabled sources, returned counts, and source-level errors.
Search and corpus files:
- `query_understanding.md`: keyword interpretation and ambiguity analysis.
- `plan.json`: search plan and diversified queries.
- `protocol.json`: research question, scope, inclusion/exclusion criteria, negative keywords.
- `metadata.json`: normalized raw search candidates.
- `user_corpus_log.json`: local corpus import log.
- `corpus.json`: screened full corpus.
- `core_papers.json`: core papers.
- `adjacent_papers.json`: adjacent papers.
- `excluded_papers.json`: excluded papers and reasons.
- `ranked_papers.json`: final report-view papers.
Evidence and quality files:
- `verification.json`: DOI, URL, PDF, and code status.
- `download_log.json`: PDF download status.
- `fulltext/`: extracted PDF text.
- `paper_notes.json`: full-text extraction metadata.
- `literature_matrix.json`: method/task/evidence matrix.
- `synthesis.json`: field overview, method taxonomy, paper summaries, trends, gaps.
- `quality_gate.json`: pass/retry/needs-user-attention verdict.
- `reflection.json`: search quality reflection and retry hints.
- `evidence_ledger.json`: claim-level evidence ledger.
- `review_agent_findings.json`: review-agent checks.
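A downstream script could use the evidence ledger to flag report claims without supporting citations. The entry schema here ({"claim", "citations"}) is a guess for illustration, not the documented format of `evidence_ledger.json`:

```python
def find_uncited_claims(ledger: list[dict]) -> list[str]:
    """Return the claims whose citation list is empty or missing.

    Assumes each ledger entry looks like {"claim": str, "citations": [int, ...]};
    this schema is hypothetical.
    """
    return [entry["claim"] for entry in ledger if not entry.get("citations")]
```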
Final reports:
- `report.canonical.json`: shared bilingual report model and citation map.
- `report.zh.md`, `report.en.md`
- `report.zh.html`, `report.en.html`
- `report.zh.pdf`, `report.en.pdf`
- `pdfs/`: downloaded open-access PDFs.
## GitHub / Code Filter

```shell
PaperPilot "retrieval augmented generation" --auto-confirm --github-filter required
```
Filter modes:
- `any`: keep all papers and annotate code availability.
- `required`: the final report view keeps only papers with detected public code links; the full screened corpus is still saved.
- `none`: the final report view keeps only papers without detected public code links.
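The three modes above amount to a simple selection over the screened corpus. The `code_url` field name is an illustrative assumption, not PaperPilot's actual paper schema:

```python
def apply_github_filter(papers: list[dict], mode: str = "any") -> list[dict]:
    """Select the report view according to --github-filter semantics.

    Assumes each paper dict carries a "code_url" field when a public
    repository was detected; that field name is hypothetical.
    """
    if mode == "required":
        return [p for p in papers if p.get("code_url")]
    if mode == "none":
        return [p for p in papers if not p.get("code_url")]
    return list(papers)  # "any": keep everything; code availability is annotated
```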
## CLI Options

```
--max-papers INT           maximum papers in final report view
--since-year INT           prefer papers since this year
--github-filter any|required|none
--github-search-limit INT  active GitHub search limit
--no-download              skip PDF downloads
--pdf-limit INT            maximum PDFs to download
--user-corpus PATH         import local corpus path; repeatable
--mode quick|apa|systematic
--interaction auto|gated
--quality fast|balanced|strict
--include-adjacent         include adjacent papers in matrix/appendix
--sources auto|all|core|biomed|cs|configured
--enable-source SOURCE     enable one additional source; repeatable
--disable-source SOURCE    disable one source; repeatable
```
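The repeatable flags above (`--user-corpus`, `--enable-source`, `--disable-source`) map naturally onto argparse's `append` action, which collects every occurrence into a list. This is a sketch of the pattern with illustrative defaults, not PaperPilot's actual parser:

```python
import argparse

parser = argparse.ArgumentParser(prog="PaperPilot")
parser.add_argument("query")
parser.add_argument("--max-papers", type=int, default=50)
parser.add_argument("--github-filter", choices=["any", "required", "none"], default="any")
# action="append" lets the flag be passed multiple times, collecting a list.
parser.add_argument("--user-corpus", action="append", default=[])
parser.add_argument("--enable-source", action="append", default=[])

args = parser.parse_args([
    "RNA inverse folding",
    "--user-corpus", "./papers",
    "--user-corpus", "references.bib",
])
```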
## Development

Run tests:

```shell
python -m unittest discover -s tests
python -m compileall literature_agent
```

Build locally:

```shell
python -m pip install build twine
python -m build
python -m twine check dist/*
```

Publish to PyPI:

```shell
python -m twine upload dist/*
```
## Open Source Notes
Before pushing to GitHub:
- Make sure `.gitignore` is present.
- Do not commit API keys, local run outputs, build artifacts, or virtual environments.
- Add a `LICENSE` file before calling the project open source in a strict legal sense.
- If any PyPI or LLM token was ever committed, revoke it immediately and create a new one.
Suggested first push:

```shell
git init
git add README.md README.zh-CN.md pyproject.toml literature_agent tests paperpilot_agent_flow.html .gitignore LICENSE
git commit -m "Initial open source release"
git branch -M main
git remote add origin https://github.com/CHB-learner/PaperPilot.git
git push -u origin main
```