
Papercutter Factory

Automated Evidence Synthesis Pipeline for Research

Papercutter Factory is a local, batch-processing pipeline designed to transform unstructured academic PDF collections into structured datasets and systematic review reports.

It fills the tooling gap between reference managers (Zotero, Mendeley) and statistical analysis software (R, Stata). Unlike generic "Chat with PDF" tools, Papercutter is built for extraction reliability, reproducibility, and scale: it uses Docling to convert PDFs into structured Markdown and JSON before applying LLM-based extraction, so tabular data and complex layouts are preserved.


Key Capabilities

  • Pipeline Architecture: A stateless, resumable workflow. Processing status is tracked per file, allowing large batches to be paused and resumed without data loss.
  • High-Fidelity Digitization: Utilizes IBM's Docling to convert PDFs into structured Markdown, preserving table geometry and section hierarchy better than standard text extraction.
  • Intelligent Splitting: Automatically detects large volumes (e.g., handbooks, dissertations) and splits them into chapter-level units for granular analysis.
  • Schema Validation (Pilot Mode): A "Pilot Protocol" tests extraction schemas on a random sample and attaches a source quote to every extracted data point, so accuracy can be verified before the full library is processed.
  • Bibliographic Linking: Fuzzy-matches PDF contents to existing BibTeX records to ensure metadata consistency.
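
The resumable design can be pictured as a per-file status ledger that is persisted after every paper. The sketch below is a minimal illustration; the JSON layout and function names are assumptions, not Papercutter's actual internals:

```python
import json
from pathlib import Path

def load_state(state_path: Path) -> dict:
    """Return the per-file status map, e.g. {"smith2020.pdf": "done"}."""
    if state_path.exists():
        return json.loads(state_path.read_text())
    return {}

def save_state(state_path: Path, state: dict) -> None:
    state_path.parent.mkdir(parents=True, exist_ok=True)
    state_path.write_text(json.dumps(state, indent=2))

def process_batch(pdfs, extract, state_path: Path) -> dict:
    """Run `extract` on each PDF exactly once. Completed files are
    skipped on re-run, so an interrupted batch can simply be restarted."""
    state = load_state(state_path)
    for pdf in pdfs:
        if state.get(pdf) == "done":
            continue  # already processed in an earlier run
        extract(pdf)
        state[pdf] = "done"
        save_state(state_path, state)  # persist after every file
    return state
```

Persisting after each file (rather than at the end of the batch) is what makes a crash or Ctrl-C cheap: at most one paper's work is repeated.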

Installation

Papercutter is a comprehensive toolkit that relies on PyTorch and Docling for document layout analysis. A standard installation requires Python 3.10+.

pip install papercutter

System Requirements:

  • Hardware: A GPU is recommended for optimal OCR and layout analysis speed, though the system functions on CPU.
  • API Access: Requires an active API key for OpenAI (export OPENAI_API_KEY=...) or Anthropic.
  • Optional: Tesseract OCR (for legacy scanned documents).

Workflow Overview

The system operates in four distinct phases to ensure data integrity.

1. Ingest (Digitization)

Initializes the project structure and converts raw PDFs into a unified internal format.

# Initialize a new review project
papercutter init my_project

# Process PDFs and link to metadata
cd my_project
papercutter ingest ./raw_pdfs/ --bib references.bib
  • Process: Scans directories, identifies duplicates via SHA256, splits large volumes, and runs Docling conversion.
  • Metadata: If a BibTeX file is provided, PDFs are linked to citations via fuzzy title matching.
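
The duplicate check and BibTeX linking described above can be sketched with the standard library alone: content hashing via hashlib and fuzzy title matching via difflib. This is an illustrative approximation, not Papercutter's implementation, and the 0.85 threshold is an arbitrary example:

```python
import hashlib
from difflib import SequenceMatcher
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash used to spot duplicate PDFs regardless of filename."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def match_title(pdf_title: str, bib_entries: dict, threshold: float = 0.85):
    """Fuzzy-match an extracted title to BibTeX entries {key: title}.
    Returns the best-matching BibTeX key, or None below the threshold."""
    best_key, best_score = None, 0.0
    for key, title in bib_entries.items():
        score = SequenceMatcher(None, pdf_title.lower(), title.lower()).ratio()
        if score > best_score:
            best_key, best_score = key, score
    return best_key if best_score >= threshold else None
```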

2. Configure (Schema Definition)

Defines the variables to be extracted from the literature.

papercutter configure
  • Process: The system analyzes abstracts from the ingested library and proposes a draft schema, which the user refines into a config.yaml file that enforces strict types on extracted data.

Example config.yaml:

columns:
  - key: sample_size
    description: "The total number of observations (N). Exclude year ranges."
    type: integer
  - key: estimation_method
    description: "The primary statistical strategy (e.g. DiD, RDD, OLS)."
    type: string
  - key: treatment_effect
    description: "The extracted coefficient for the main treatment."
    type: float
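
A schema like this is only useful if extracted values are actually coerced to their declared types. Below is a minimal validator, sketched purely for illustration (the function and cast table are assumptions, not part of Papercutter's API):

```python
CASTS = {"integer": int, "float": float, "string": str}

def validate_row(row: dict, columns: list[dict]) -> dict:
    """Coerce an extracted row to the schema's declared types.
    Raises ValueError if a value cannot be cast."""
    out = {}
    for col in columns:
        key, typ = col["key"], col["type"]
        value = row.get(key)
        if value is None:
            out[key] = None  # missing values pass through as nulls
            continue
        try:
            out[key] = CASTS[typ](value)
        except (ValueError, TypeError) as exc:
            raise ValueError(f"{key}: cannot cast {value!r} to {typ}") from exc
    return out
```

A value like "1990-2005" extracted into sample_size would fail the integer cast here, which is exactly the kind of error the schema's "Exclude year ranges" instruction is meant to prevent.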

3. Grind (Extraction Loop)

Executes the LLM-based extraction and summarization.

# Step A: Pilot Run (Validation)
papercutter grind --pilot
  • Processes a random 5-paper sample.
  • Generates a Traceability Report (pilot_matrix.csv) containing the extracted value alongside the exact quote from the text used to derive it. This allows researchers to audit LLM performance.
# Step B: Full Execution
papercutter grind --full
  • Processes the remaining library. This step is idempotent; already processed papers are skipped.
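
The traceability idea is simple to picture: every extracted value is stored next to the verbatim quote it came from. A hedged sketch of such a report writer (the column names are assumptions, not the file's actual layout):

```python
import csv

def write_trace(path, records):
    """Write a traceability report: one row per extracted data point,
    pairing the value with the verbatim quote it was derived from."""
    fieldnames = ["paper", "column", "value", "source_quote"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
```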

4. Report (Synthesis)

Compiles final artifacts for analysis and reading.

papercutter report
  • Outputs:
    • matrix.csv: A flattened dataset of all extracted variables, ready for import into R/Stata/Pandas.
    • systematic_review.pdf: A compiled LaTeX document containing:
      • Structured Summaries: One-page standardized syntheses of every paper.
      • Contribution Grid: A consolidated appendix layout for rapid comparison.
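
matrix.csv is a plain flat file, so it can be loaded with any CSV reader (pandas.read_csv or R's read.csv work directly). A dependency-free Python sketch that also coerces numeric fields:

```python
import csv

def load_matrix(path):
    """Read matrix.csv into a list of dicts, coercing numeric strings."""
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for key, val in row.items():
                try:
                    row[key] = float(val) if "." in val else int(val)
                except ValueError:
                    pass  # non-numeric fields stay as strings
            rows.append(row)
    return rows
```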

Project Structure

Papercutter enforces a standardized directory structure to manage state.

my_project/
├── input/                  # Raw PDF repository
├── config.yaml             # Extraction schema definition
├── .papercutter/           # Internal state (Markdown cache, Inventory)
└── output/
    ├── matrix.csv          # Final dataset for analysis
    ├── systematic_review.pdf
    └── pilot_matrix.csv    # Audit trail for verification

Common Use Cases

Meta-Regression Analysis

Goal: Extract specific regression coefficients and standard errors from 50+ empirical papers.

Workflow: Define coefficient, standard_error, and model_specification in the schema. Use Pilot Mode to ensure the LLM distinguishes between "Main Results" and "Robustness Checks."
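
For this use case, the extraction schema might look like the following illustrative config.yaml fragment (keys and descriptions are examples, not shipped defaults):

columns:
  - key: coefficient
    description: "Point estimate of the main treatment effect (preferred specification only)."
    type: float
  - key: standard_error
    description: "Standard error of the main coefficient. Exclude t-statistics."
    type: float
  - key: model_specification
    description: "Model the estimate was taken from (e.g. 'Table 3, Col 2, FE')."
    type: string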

Large Volume Processing

Goal: Analyze a handbook or multi-chapter report.

Workflow: The Ingest phase detects the volume size, and the Splitter module separates chapters into individual units. The Report phase generates a "Flashcard"-style appendix for rapid review.

Library Remediation

Goal: Organize a messy folder of PDFs with inconsistent filenames.

Workflow: The Ingest phase uses header analysis to identify each paper, links it to a clean BibTeX record, and generates a structured inventory of the collection.


License

MIT License. Open for academic and commercial use.
