
Papercutter Factory

Automated Evidence Synthesis Pipeline for Research

Papercutter Factory is a local, batch-processing pipeline designed to transform unstructured academic PDF collections into structured datasets and systematic review reports.

It addresses the specific tooling gap between reference managers (Zotero, Mendeley) and analysis software (R, Stata). Unlike generic "Chat with PDF" tools, Papercutter is architected for extraction reliability, reproducibility, and scale. It utilizes Docling to convert PDFs into structured Markdown and JSON before applying LLM-based extraction, ensuring tabular data and complex layouts are preserved.


Key Capabilities

  • Pipeline Architecture: A stateless, resumable workflow. Processing status is tracked per file, allowing large batches to be paused and resumed without data loss.
  • High-Fidelity Digitization: Utilizes IBM's Docling to convert PDFs into structured Markdown, preserving table geometry and section hierarchy better than standard text extraction.
  • Intelligent Splitting: Automatically detects large volumes (e.g., handbooks, dissertations) and splits them into chapter-level units for granular analysis.
  • Schema Validation (Pilot Mode): A "Pilot Protocol" tests the extraction schema on a random sample and records a source quote for every extracted data point, so accuracy can be verified before the full library is processed.
  • Bibliographic Linking: Fuzzy-matches PDF contents to existing BibTeX records to ensure metadata consistency.

Installation

Papercutter relies on PyTorch and Docling for document layout analysis. A standard installation requires Python 3.10 or newer.

pip install papercutter

System Requirements:

  • Hardware: A GPU is recommended for optimal OCR and layout analysis speed, though the system functions on CPU.
  • API Access: Requires an active API key for OpenAI (export OPENAI_API_KEY=...) or Anthropic.
  • Optional: Tesseract OCR (for legacy scanned documents).

Workflow Overview

The system operates in four distinct phases to ensure data integrity.

1. Ingest (Digitization)

Initializes the project structure and converts raw PDFs into a unified internal format.

# Initialize a new review project
papercutter init my_project

# Process PDFs and link to metadata
cd my_project
papercutter ingest ./raw_pdfs/ --bib references.bib

  • Process: Scans directories, identifies duplicates via SHA-256 hashing, splits large volumes, and runs Docling conversion.
  • Metadata: If a BibTeX file is provided, PDFs are linked to citations via fuzzy title matching.
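The dedup and linking steps above can be sketched in a few lines. This is an illustrative approximation, not Papercutter's actual implementation; the threshold value and function names are assumptions:

```python
import hashlib
from difflib import SequenceMatcher

def sha256_of(file_bytes: bytes) -> str:
    # Hash raw file bytes; byte-identical PDFs collapse to one record.
    return hashlib.sha256(file_bytes).hexdigest()

def match_title(pdf_title: str, bib_titles: list[str], threshold: float = 0.85):
    """Return the best-matching BibTeX title, or None if nothing clears the threshold."""
    def norm(s: str) -> str:
        # Case-fold and collapse whitespace before comparing.
        return " ".join(s.lower().split())
    best, best_score = None, 0.0
    for candidate in bib_titles:
        score = SequenceMatcher(None, norm(pdf_title), norm(candidate)).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best if best_score >= threshold else None
```

A fuzzy ratio rather than exact equality tolerates the casing and whitespace noise typical of titles pulled from PDF headers.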

2. Configure (Schema Definition)

Defines the variables to be extracted from the literature.

papercutter configure

  • Process: The system analyzes abstracts from the ingested library and proposes a draft schema, which the user refines into a config.yaml file that enforces strict types on extracted data.

Example config.yaml:

columns:
  - key: sample_size
    description: "The total number of observations (N). Exclude year ranges."
    type: integer
  - key: estimation_method
    description: "The primary statistical strategy (e.g. DiD, RDD, OLS)."
    type: string
  - key: treatment_effect
    description: "The extracted coefficient for the main treatment."
    type: float
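To make the "strict types" idea concrete, here is a minimal sketch of how extracted rows could be checked against the column declarations above. The `CASTS` map and `validate_row` helper are hypothetical, not part of Papercutter's API:

```python
from typing import Any

# Hypothetical mapping from the config.yaml `type` field to Python casts.
CASTS = {"integer": int, "float": float, "string": str}

def validate_row(row: dict[str, Any], columns: list[dict]) -> dict[str, Any]:
    """Coerce each extracted value to its declared type; raise on mismatch."""
    out = {}
    for col in columns:
        key, cast = col["key"], CASTS[col["type"]]
        try:
            out[key] = cast(row[key])
        except (KeyError, TypeError, ValueError) as exc:
            raise ValueError(f"column {key!r} failed the {col['type']} check") from exc
    return out
```

Failing fast on a type mismatch surfaces LLM extraction errors (e.g. a year range returned where N was expected) before they reach the final dataset.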

3. Grind (Extraction Loop)

Executes the LLM-based extraction and summarization.

# Step A: Pilot Run (Validation)
papercutter grind --pilot

  • Processes a random 5-paper sample.
  • Generates a Traceability Report (pilot_matrix.csv) containing each extracted value alongside the exact quote from the text used to derive it, allowing researchers to audit LLM performance.
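One way to audit such a traceability report programmatically is to check that every supporting quote actually appears in its source document. A minimal sketch, assuming hypothetical `paper_id` and `quote` columns:

```python
def audit_quotes(rows: list[dict], doc_texts: dict[str, str]) -> list[str]:
    """Flag pilot rows whose supporting quote is absent from the source text."""
    flagged = []
    for row in rows:
        text = doc_texts.get(row["paper_id"], "")
        if row["quote"] not in text:
            # Quote not found verbatim: likely a hallucinated or paraphrased citation.
            flagged.append(row["paper_id"])
    return flagged
```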
# Step B: Full Execution
papercutter grind --full

  • Processes the remaining library. This step is idempotent; already processed papers are skipped.
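The idempotent, resumable behavior described above can be sketched as a loop that checkpoints completed IDs to a state file after each paper. This is an illustration of the pattern, not Papercutter's internal code:

```python
import json
from pathlib import Path

def grind(paper_ids, extract, state_path: Path) -> set:
    """Process each paper exactly once; completed IDs survive interrupted runs."""
    done = set(json.loads(state_path.read_text())) if state_path.exists() else set()
    for pid in paper_ids:
        if pid in done:
            continue  # idempotent: skip papers already extracted
        extract(pid)
        done.add(pid)
        # Checkpoint after every paper so a crash loses at most one unit of work.
        state_path.write_text(json.dumps(sorted(done)))
    return done
```

Re-running the same command after an interruption picks up where the previous run stopped, which is what makes large batches safe to pause.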

4. Report (Synthesis)

Compiles final artifacts for analysis and reading.

papercutter report
  • Outputs:
    • matrix.csv: A flattened dataset of all extracted variables, ready for import into R/Stata/Pandas.
    • systematic_review.pdf: A compiled LaTeX document containing:
      • Structured Summaries: One-page standardized syntheses of every paper.
      • Contribution Grid: A consolidated appendix layout for rapid comparison.
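Loading matrix.csv for analysis is straightforward; here is a stdlib-only sketch that casts the numeric columns from the earlier example schema (Pandas users would reach for `pd.read_csv` instead):

```python
import csv
import io

def load_matrix(csv_text: str) -> list[dict]:
    """Parse matrix.csv rows, casting the numeric columns for analysis."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        row["sample_size"] = int(row["sample_size"])
        row["treatment_effect"] = float(row["treatment_effect"])
        rows.append(row)
    return rows
```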

Project Structure

Papercutter enforces a standardized directory structure to manage state.

my_project/
├── input/                  # Raw PDF repository
├── config.yaml             # Extraction schema definition
├── .papercutter/           # Internal state (Markdown cache, Inventory)
└── output/
    ├── matrix.csv          # Final dataset for analysis
    ├── systematic_review.pdf
    └── pilot_matrix.csv    # Audit trail for pilot verification

Common Use Cases

Meta-Regression Analysis

Goal: Extract specific regression coefficients and standard errors from 50+ empirical papers. Workflow: Define coefficient, standard_error, and model_specification in the schema. Use the Pilot Mode to ensure the LLM distinguishes between "Main Results" and "Robustness Checks."

Large Volume Processing

Goal: Analyze a Handbook or multi-chapter Report. Workflow: The Ingest phase detects the volume size. The Splitter module separates chapters into individual units. The Report phase generates a "Flashcard" style appendix for rapid review.

Library Remediation

Goal: Organize a messy folder of PDFs with inconsistent filenames. Workflow: The Ingest phase uses header analysis to identify papers and links them to a clean BibTeX file, generating a structured inventory of the collection.


License

MIT License. Open for academic and commercial use.
