
Documents and large language models.


Docprompt

Document AI, powered by LLMs

About

Docprompt is a library for Document AI. It aims to make enterprise-level document analysis easy by leveraging the zero-shot capabilities of large language models.

Supercharged Document Analysis

  • Common utilities for interacting with PDFs
    • PDF loading and serialization
    • PDF byte compression using Ghostscript :ghost:
    • Fast rasterization :fire: :rocket:
    • Page splitting and re-export with PDFium
    • Document Search, powered by Rust :fire:
  • Support for most OCR providers with batched inference
    • Google :white_check_mark:
    • Amazon Textract :white_check_mark:
    • Tesseract :white_check_mark:
    • Azure Document Intelligence :red_circle:
  • Layout Aware Page Representation
    • Run Document Layout Analysis with text-only LLMs!
  • Prompt Garden for common zero-shot document analysis tasks, including:
    • Markerization (Pdf2Markdown)
    • Table Extraction
    • Page Classification
    • Key-value extraction (Coming soon)
    • Segmentation (Coming soon)


Features

  • Representations for common document layout types, such as TextBlock and BoundingBox
  • Generic implementations of OCR providers
  • Document Search powered by Rust and R-trees :fire:
  • Table Extraction, Page Classification, PDF2Markdown

Installation

Use the package manager pip to install Docprompt.

pip install docprompt

With an OCR provider

pip install "docprompt[google]

With search support

pip install "docprompt[search]"

Usage

Simple Operations

from docprompt import load_document

# Load a document
document = load_document("path/to/my.pdf")

# Rasterize a single page using Ghostscript
page_number = 5
rastered = document.rasterize_page(page_number, dpi=120)

# Split a pdf based on a page range
document_2 = document.split(start=125, stop=130)
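As a rough follow-on sketch (assuming the rasterized page comes back as encoded image bytes, which may vary by version), the output can be written straight to disk:

from pathlib import Path

# Assumption: `rastered` holds encoded image bytes (e.g. PNG). If your version
# returns a different type, adapt this accordingly.
Path("page_5.png").write_bytes(rastered)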

Converting a PDF to markdown

Converting documents into markdown is a great way to prepare them for downstream chunking or ingestion into a RAG system.

from docprompt import load_document_node
from docprompt.tasks.markerize import AnthropicMarkerizeProvider

document_node = load_document_node("path/to/my.pdf")
markerize_provider = AnthropicMarkerizeProvider()

markerized_document = markerize_provider.process_document_node(document_node)
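Once you have markdown in hand, chunking it for a RAG pipeline is straightforward. The sketch below is illustrative only: `page_markdown` stands in for a markdown string pulled from the markerize result, and how you obtain that string depends on the result object your provider returns.

# Hypothetical input: a plain markdown string taken from the markerize result.
page_markdown = "# Example page\n\nSome extracted text..."

# Naive fixed-size chunking ahead of ingestion into a vector store.
chunk_size = 1000
chunks = [page_markdown[i:i + chunk_size] for i in range(0, len(page_markdown), chunk_size)]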

Extracting Tables

Extract tables with state-of-the-art speed and accuracy.

from docprompt import load_document_node
from docprompt.tasks.table_extraction import AnthropicTableExtractionProvider

document_node = load_document_node("path/to/my.pdf")
table_extraction_provider = AnthropicTableExtractionProvider()

extracted_tables = table_extraction_provider.process_document_node(document_node)
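A common next step is exporting tables for downstream analysis. The snippet below is a minimal sketch: `rows` stands in for the cells of one extracted table, and the exact attributes you read off `extracted_tables` depend on the provider's result type.

import csv

# Hypothetical input: a list of rows (lists of cell strings) taken from one
# extracted table; pull the real values from `extracted_tables`.
rows = [["Item", "Qty"], ["Widget", "3"]]

with open("table.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)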

Performing OCR

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

document_node[0].ocr_result # Access OCR results
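Results are cached per page on the node, so you can pull them back out by page index as shown above. A hedged sketch of walking every page, assuming the node supports len() the same way it supports indexing:

# Assumption: DocumentNode supports len() in addition to page indexing; if it
# does not in your version, iterate using your document's page count instead.
for page_index in range(len(document_node)):
    print(page_index, document_node[page_index].ocr_result is not None)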

Document Search

When a large language model returns a result, we often want to highlight that result for our users. However, language models return results as text, while highlighting it on the page requires a page number and a bounding box.

After extracting text from a PDF, we can support this pattern using DocumentProvenanceLocator, which lives on a DocumentNode:

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

# With OCR results available, we can now instantiate a locator and search through documents.

document_node.locator.search("John Doe") # Returns results for every occurrence of "John Doe" across the document
document_node.locator.search("Jane Doe", page_number=4) # Returns only matching results from page 4

This functionality uses a combination of rtree and the Rust library tantivy, allowing you to perform thousands of searches in seconds :fire: :rocket:
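Because the search index is built from OCR geometry, each hit can be traced back to a location on the page, which is what makes the highlighting workflow described above possible. The loop below is an illustrative sketch: the exact fields on each result object (page number, bounding box, and so on) depend on the version, so inspect the returned objects before relying on specific attributes.

# Illustrative sketch: each hit is assumed to carry page and geometry
# information usable for drawing a highlight; check the real result fields.
for result in document_node.locator.search("John Doe"):
    print(result)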


