
Automated evidence synthesis pipeline for systematic literature reviews

Project description

Papercutter Factory

Automated Evidence Synthesis Pipeline for Research

Papercutter Factory is a local, batch-processing pipeline designed to transform unstructured academic PDF collections into structured datasets and systematic review reports.

It addresses the specific tooling gap between reference managers (Zotero, Mendeley) and analysis software (R, Stata). Unlike generic "Chat with PDF" tools, Papercutter is architected for extraction reliability, reproducibility, and scale. It utilizes Docling to convert PDFs into structured Markdown and JSON before applying LLM-based extraction, ensuring tabular data and complex layouts are preserved.


Key Capabilities

  • Pipeline Architecture: A resumable, batch-oriented workflow. Processing status is tracked per file, so large batches can be paused and resumed without data loss.
  • High-Fidelity Digitization: Utilizes IBM's Docling to convert PDFs into structured Markdown, preserving table geometry and section hierarchy better than standard text extraction.
  • Intelligent Splitting: Automatically detects large volumes (e.g., handbooks, dissertations) and splits them into chapter-level units for granular analysis.
  • Schema Validation (Pilot Mode): A "Pilot Protocol" tests the extraction schema on a random sample, attaching a source quote to every extracted data point so accuracy can be verified before the full library is processed.
  • Bibliographic Linking: Fuzzy-matches PDF contents to existing BibTeX records to ensure metadata consistency.

Installation

Papercutter relies on PyTorch and Docling for document layout analysis. A standard installation requires Python 3.10+.

pip install papercutter

System Requirements:

  • Hardware: A GPU is recommended for optimal OCR and layout analysis speed, though the system functions on CPU.
  • API Access: Requires an active API key for OpenAI (export OPENAI_API_KEY=...) or Anthropic.
  • Optional: Tesseract OCR (for legacy scanned documents).

Workflow Overview

The system operates in four distinct phases to ensure data integrity.

1. Ingest (Digitization)

Initializes the project structure and converts raw PDFs into a unified internal format.

# Initialize a new review project
papercutter init my_project

# Process PDFs and link to metadata
cd my_project
papercutter ingest ./raw_pdfs/ --bib references.bib
  • Process: Scans directories, identifies duplicates via SHA256, splits large volumes, and runs Docling conversion.
  • Metadata: If a BibTeX file is provided, PDFs are linked to citations via fuzzy title matching.
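The fuzzy title matching described above can be sketched with the standard library. This is an illustrative implementation, not Papercutter's actual API: `match_title` and the 0.85 threshold are assumptions.

```python
import difflib

def match_title(pdf_title: str, bib_titles: list[str], threshold: float = 0.85):
    """Return the BibTeX title most similar to the PDF's detected title,
    or None if nothing clears the similarity threshold."""
    def norm(s: str) -> str:
        # Normalize case and whitespace before comparing
        return " ".join(s.lower().split())

    best, best_score = None, 0.0
    for cand in bib_titles:
        score = difflib.SequenceMatcher(None, norm(pdf_title), norm(cand)).ratio()
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= threshold else None

titles = [
    "The Effect of Minimum Wages on Employment",
    "Regression Discontinuity Designs in Economics",
]
print(match_title("the effect of minimum  wages on employment", titles))
```

A threshold-based ratio like this tolerates the case, spacing, and OCR noise typical of titles extracted from PDF headers, while rejecting unrelated papers.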

2. Configure (Schema Definition)

Defines the variables to be extracted from the literature.

papercutter configure
  • Process: The system analyzes abstracts from the ingested library and proposes a draft schema. The user then edits the generated config.yaml to enforce strict types on the extracted data.

Example config.yaml:

columns:
  - key: sample_size
    description: "The total number of observations (N). Exclude year ranges."
    type: integer
  - key: estimation_method
    description: "The primary statistical strategy (e.g. DiD, RDD, OLS)."
    type: string
  - key: treatment_effect
    description: "The extracted coefficient for the main treatment."
    type: float
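A schema like this can be enforced with straightforward type coercion. The sketch below is an assumption about how validation might work, with extracted values arriving as raw strings:

```python
# Map schema type names to Python casters (illustrative, not Papercutter's internals)
CASTERS = {"integer": int, "float": float, "string": str}

def validate_row(row: dict, columns: list[dict]) -> dict:
    """Coerce each extracted value to its declared type; raise on mismatch."""
    out = {}
    for col in columns:
        key, typ = col["key"], col["type"]
        raw = row.get(key)
        if raw is None:
            out[key] = None  # missing values pass through as None
            continue
        try:
            out[key] = CASTERS[typ](raw)
        except ValueError as exc:
            raise ValueError(f"{key}: {raw!r} is not a valid {typ}") from exc
    return out

schema = [{"key": "sample_size", "type": "integer"},
          {"key": "treatment_effect", "type": "float"}]
print(validate_row({"sample_size": "1200", "treatment_effect": "0.042"}, schema))
```

Strict casting like this is what catches an LLM returning "approximately 1,200" where the schema demands an integer.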

3. Grind (Extraction Loop)

Executes the LLM-based extraction and summarization.

# Step A: Pilot Run (Validation)
papercutter grind --pilot
  • Processes a random 5-paper sample.
  • Generates a Traceability Report (pilot_trace.csv) containing each extracted value alongside the exact quote from the text used to derive it, so researchers can audit LLM performance.
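One way to audit such a trace is to check that every supporting quote actually appears in the cached Markdown. The row shape and field names below are assumptions for illustration:

```python
def audit_quotes(rows: list[dict], source_text: str) -> list[dict]:
    """Return the rows whose supporting quote is absent from the source text."""
    def norm(s: str) -> str:
        # Collapse whitespace so line breaks in the Markdown don't cause misses
        return " ".join(s.split())

    flat = norm(source_text)
    return [r for r in rows if norm(r["quote"]) not in flat]

rows = [{"key": "sample_size", "value": 1200,
         "quote": "our sample of 1,200 firms"}]
doc = "We analyze our sample of\n1,200 firms over ten years."
print(audit_quotes(rows, doc))  # empty list: every quote was found
```

Any row returned by a check like this is a candidate hallucination and worth manual review before trusting the full run.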
# Step B: Full Execution
papercutter grind --full
  • Processes the remaining library. This step is idempotent; already processed papers are skipped.
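The idempotent skip can be implemented with a simple per-file state record. The `state.json` layout below is an assumption, not Papercutter's actual inventory format:

```python
import json
import pathlib

STATE = pathlib.Path("state.json")  # hypothetical inventory of processed papers

def load_done() -> set:
    """Read the set of already-processed paper IDs, empty on first run."""
    return set(json.loads(STATE.read_text())) if STATE.exists() else set()

def mark_done(paper_id: str) -> None:
    """Persist a paper ID so a later run will skip it."""
    done = load_done()
    done.add(paper_id)
    STATE.write_text(json.dumps(sorted(done)))

def grind(paper_ids: list[str]) -> None:
    done = load_done()
    for pid in paper_ids:
        if pid in done:
            continue  # already processed: skipping makes re-runs idempotent
        # ... run extraction for pid here ...
        mark_done(pid)
```

Persisting state after each paper, rather than at the end of the batch, is what lets an interrupted run resume without repeating (or losing) work.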

4. Report (Synthesis)

Compiles final artifacts for analysis and reading.

papercutter report
  • Outputs:
    • matrix.csv: A flattened dataset of all extracted variables, ready for import into R/Stata/Pandas.
    • systematic_review.pdf: A compiled LaTeX document containing:
      • Structured Summaries: One-page standardized syntheses of every paper.
      • Contribution Grid: A consolidated appendix layout for rapid comparison.
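Flattening the per-paper extractions into matrix.csv needs only the standard library; the record shape and column names below are illustrative assumptions:

```python
import csv
import io

def write_matrix(records: list[dict], fieldnames: list[str]) -> str:
    """Flatten per-paper extraction dicts into CSV text, one row per paper."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for rec in records:
        # Missing variables become empty cells rather than errors
        writer.writerow({k: rec.get(k, "") for k in fieldnames})
    return buf.getvalue()

records = [
    {"paper_id": "smith2021", "sample_size": 1200, "treatment_effect": 0.042},
    {"paper_id": "lee2019", "sample_size": 640},  # treatment_effect not found
]
print(write_matrix(records, ["paper_id", "sample_size", "treatment_effect"]))
```

Emitting blank cells for missing variables keeps the matrix rectangular, which is what R, Stata, and Pandas expect on import.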

Project Structure

Papercutter enforces a standardized directory structure to manage state.

my_project/
├── input/                  # Raw PDF repository
├── config.yaml             # Extraction schema definition
├── .papercutter/           # Internal state (Markdown cache, Inventory)
└── output/
    ├── matrix.csv          # Final dataset for analysis
    ├── systematic_review.pdf
    └── pilot_trace.csv     # Audit trail for verification

Common Use Cases

Meta-Regression Analysis

Goal: Extract specific regression coefficients and standard errors from 50+ empirical papers. Workflow: Define coefficient, standard_error, and model_specification in the schema. Use the Pilot Mode to ensure the LLM distinguishes between "Main Results" and "Robustness Checks."
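Once matrix.csv exists, a fixed-effect (inverse-variance weighted) summary of the extracted coefficients is a few lines. The column names follow the schema example above, and the numbers here are invented for illustration:

```python
def weighted_mean(effects: list[float], std_errors: list[float]) -> float:
    """Fixed-effect meta-analytic mean: inverse-variance weighted average."""
    weights = [1 / se**2 for se in std_errors]  # precision weights
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

effects = [0.042, 0.050, 0.031]  # hypothetical treatment_effect column
ses = [0.010, 0.020, 0.015]      # hypothetical standard_error column
print(round(weighted_mean(effects, ses), 4))
```

Precise estimates (small standard errors) dominate the average, which is the standard fixed-effect assumption; a real meta-regression would also model heterogeneity across studies.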

Large Volume Processing

Goal: Analyze a Handbook or multi-chapter Report. Workflow: The Ingest phase detects the volume size. The Splitter module separates chapters into individual units. The Report phase generates a "Flashcard" style appendix for rapid review.

Library Remediation

Goal: Organize a messy folder of PDFs with inconsistent filenames. Workflow: The Ingest phase uses header analysis to identify papers and links them to a clean BibTeX file, generating a structured inventory of the collection.


License

MIT License. Open for academic and commercial use.

Project details


Download files

Download the file for your platform.

Source Distribution

papercutter-2.0.1.tar.gz (179.8 kB)

Uploaded Source

Built Distribution


papercutter-2.0.1-py3-none-any.whl (218.9 kB)

Uploaded Python 3

File details

Details for the file papercutter-2.0.1.tar.gz.

File metadata

  • Download URL: papercutter-2.0.1.tar.gz
  • Size: 179.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for papercutter-2.0.1.tar.gz:

  • SHA256: 5c1dbd359bdc0d7529d03c2bce68870a80e47e911d3a45b651f8f60a6f394bb0
  • MD5: ebf74096ca9caee6c9df3132ae92bfcc
  • BLAKE2b-256: c71a0af80fdb9e1ba6c0e5e2a2adce787b0d782a3a4d06d9d04eff61f5d2c661


File details

Details for the file papercutter-2.0.1-py3-none-any.whl.

File metadata

  • Download URL: papercutter-2.0.1-py3-none-any.whl
  • Size: 218.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for papercutter-2.0.1-py3-none-any.whl:

  • SHA256: 91eaa30734827958405b6a73be57b5b3e28c8f63d55988b8a70a71466fa9f45d
  • MD5: f884b34a0c1236e880741504984596d1
  • BLAKE2b-256: aa75a2e5019a0c7934b8d19fbf0b875c875d0c3fe7b10cbc529ac04b1b794acf

