
Docprompt

Document AI, powered by LLMs
Explore the docs »

Report Bug · Request Feature

About

Docprompt is a library for Document AI. It aims to make enterprise-level document analysis easy thanks to the zero-shot capability of large language models.

Supercharged Document Analysis

  • Common utilities for interacting with PDFs
    • PDF loading and serialization
    • PDF byte compression using Ghostscript 👻
    • Fast rasterization 🔥 🚀
    • Page splitting and re-export with PDFium
    • Document Search, powered by Rust 🔥
  • Support for most OCR providers with batched inference
    • Google ✅
    • Amazon Textract ✅
    • Tesseract ✅
    • Azure Document Intelligence 🔴
  • Layout-aware page representation
    • Run Document Layout Analysis with text-only LLMs!
  • Prompt Garden for common zero-shot document analysis tasks, including:
    • Markerization (PDF2Markdown)
    • Table Extraction
    • Page Classification
    • Key-value extraction (coming soon)
    • Segmentation (coming soon)

Features

  • Representations for common document layout types: TextBlock, BoundingBox, etc.
  • Generic implementations of OCR providers
  • Document Search powered by Rust and R-trees 🔥
  • Table Extraction, Page Classification, PDF2Markdown

Installation

Use the package manager pip to install Docprompt.

pip install docprompt

With an OCR provider

pip install "docprompt[google]

With search support

pip install "docprompt[search]"

Usage

Simple Operations

from docprompt import load_document

# Load a document
document = load_document("path/to/my.pdf")

# Rasterize a single page using Ghostscript
page_number = 5
rastered = document.rasterize_page(page_number, dpi=120)

# Split a pdf based on a page range
document_2 = document.split(start=125, stop=130)
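
To persist the preview, the rasterized page can be written straight to disk. This is a minimal sketch assuming rasterize_page returns encoded image bytes (the PNG extension here is an assumption, not a documented default).

# Assumption: `rastered` holds encoded image bytes that can be written as-is
with open("page_5.png", "wb") as f:
    f.write(rastered)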

Converting a PDF to markdown

Converting documents into markdown is a great way to prepare them for downstream chunking or ingestion into a RAG system.

from docprompt import load_document_node
from docprompt.tasks.markerize import AnthropicMarkerizeProvider

document_node = load_document_node("path/to/my.pdf")
markerize_provider = AnthropicMarkerizeProvider()

markerized_document = markerize_provider.process_document_node(document_node)
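
Once you have markdown, a simple heading-based splitter is often enough to prepare chunks for a RAG pipeline. The sketch below is plain Python and does not depend on Docprompt's result schema; how you obtain the markdown string from markerized_document is left as an assumption.

def split_markdown_by_heading(markdown: str, level: int = 2) -> list[str]:
    """Split a markdown string into chunks at headings of the given level."""
    marker = "#" * level + " "
    chunks, current = [], []
    for line in markdown.splitlines():
        if line.startswith(marker) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

# Hypothetical: page_markdown is the markdown text recovered from markerized_document
# chunks = split_markdown_by_heading(page_markdown)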

Extracting Tables

Extract tables with state-of-the-art speed and accuracy.

from docprompt import load_document_node
from docprompt.tasks.table_extraction import AnthropicTableExtractionProvider

document_node = load_document_node("path/to/my.pdf")
table_extraction_provider = AnthropicTableExtractionProvider()

extracted_tables = table_extraction_provider.process_document_node(document_node)
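
To inspect the output, you might walk the results and dump each table to CSV. The headers and rows attributes below are hypothetical stand-ins for whatever schema the provider actually returns, as is the assumption that the result is a flat iterable of tables.

import csv

# Hypothetical schema: each result exposes `headers` and `rows`
for i, table in enumerate(extracted_tables):
    with open(f"table_{i}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(table.headers)  # assumed attribute
        writer.writerows(table.rows)    # assumed attribute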

Performing OCR

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

document_node[0].ocr_result # Access OCR results
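
With OCR cached, the per-page results can be read back for downstream use. The page_text attribute below is an assumed name, shown only to illustrate the access pattern; check the provider's result schema for the real field.

# Hypothetical: read recognized text from the cached result for the first page
first_page_result = document_node[0].ocr_result
print(first_page_result.page_text[:500])  # page_text is an assumed attribute name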

Document Search

When a large language model returns a result, we might want to highlight that result for our users. However, language models return results as text, while what we need to show our users requires a page number and a bounding box.

After extracting text from a PDF, we can support this pattern using DocumentProvenanceLocator, which lives on a DocumentNode.

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

# With OCR results available, we can now instantiate a locator and search through documents.

document_node.locator.search("John Doe") # This will return a list of all terms across the document that contain "John Doe"
document_node.locator.search("Jane Doe", page_number=4) # Just return results a list of matching results from page 4

This functionality uses a combination of rtree and the Rust library tantivy, allowing you to perform thousands of searches in seconds. 🔥 🚀
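
The usual follow-up is to turn each hit back into something you can highlight, i.e. a page number plus a bounding box. The page_number and bounding_box attributes in this sketch are assumed names for whatever the locator's result objects actually expose.

# Hypothetical result attributes: page_number and bounding_box
for hit in document_node.locator.search("John Doe"):
    print(f"Match on page {hit.page_number}: {hit.bounding_box}")  # assumed attributes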
