Documents and large language models.

Project description

Docprompt

Document AI, powered by LLMs
Explore the docs »

· Report Bug · Request Feature

About

Docprompt is a library for Document AI. It aims to make enterprise-level document analysis easy thanks to the zero-shot capability of large language models.

Supercharged Document Analysis

  • Common utilities for interacting with PDFs
    • PDF loading and serialization
    • PDF byte compression using Ghostscript :ghost:
    • Fast rasterization :fire: :rocket:
    • Page splitting and re-export with PDFium
    • Document Search, powered by Rust :fire:
  • Support for most OCR providers with batched inference
    • Google :white_check_mark:
    • Amazon Textract :white_check_mark:
    • Tesseract :white_check_mark:
    • Azure Document Intelligence :red_circle:
  • Layout Aware Page Representation
    • Run Document Layout Analysis with text-only LLMs!
  • Prompt Garden for common zero-shot document analysis tasks, including:
    • Markerization (Pdf2Markdown)
    • Table Extraction
    • Page Classification
    • Key-value extraction (Coming soon)
    • Segmentation (Coming soon)

Features

  • Representations for common document layout types - TextBlock, BoundingBox, etc.
  • Generic implementations of OCR providers
  • Document Search powered by Rust and R-trees :fire:
  • Table Extraction, Page Classification, PDF2Markdown

Installation

Use the package manager pip to install Docprompt.

pip install docprompt

With an OCR provider

pip install "docprompt[google]

With search support

pip install "docprompt[search]"

Usage

Simple Operations

from docprompt import load_document

# Load a document
document = load_document("path/to/my.pdf")

# Rasterize a single page using Ghostscript
page_number = 5
rastered = document.rasterize_page(page_number, dpi=120)

# Split a pdf based on a page range
document_2 = document.split(start=125, stop=130)
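
The rasterized page can be written straight to disk with the standard library. A minimal sketch continuing the example above, assuming rasterize_page returns raw encoded image bytes (e.g. PNG):

from pathlib import Path

# `rastered` comes from the example above and is assumed to be raw PNG bytes
Path("page_5.png").write_bytes(rastered)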

Converting a PDF to markdown

Converting documents into markdown is a great way to prepare them for downstream chunking or ingestion into a RAG system.

from docprompt import load_document_node
from docprompt.tasks.markerize import AnthropicMarkerizeProvider

document_node = load_document_node("path/to/my.pdf")
markerize_provider = AnthropicMarkerizeProvider()

markerized_document = markerize_provider.process_document_node(document_node)
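
The resulting markdown can then be fed into a simple chunker before ingestion into a RAG pipeline. A minimal sketch in plain Python; `markdown_text` stands in for whichever markdown string you pull off the markerize result, since the exact attribute name is not shown here:

def chunk_markdown(text: str, max_chars: int = 2000, overlap: int = 200) -> list[str]:
    # Split markdown into overlapping character windows for ingestion
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks

markdown_text = "..."  # hypothetical: the markdown produced by the markerize provider above
chunks = chunk_markdown(markdown_text)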

Extracting Tables

Extract tables with SOTA speed and accuracy.

from docprompt import load_document_node
from docprompt.tasks.table_extraction import AnthropicTableExtractionProvider

document_node = load_document_node("path/to/my.pdf")
table_extraction_provider = AnthropicTableExtractionProvider()

extracted_tables = table_extraction_provider.process_document_node(document_node)
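
Downstream code often wants the tables in a tabular interchange format such as CSV. A rough sketch; the headers and rows attributes below are illustrative assumptions about the extraction result, not the library's documented schema:

import csv
import sys

writer = csv.writer(sys.stdout)
for table in extracted_tables:
    writer.writerow(table.headers)  # assumed: list[str] of column names
    writer.writerows(table.rows)    # assumed: list[list[str]] of cell values
    writer.writerow([])             # blank row between tables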

Performing OCR

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

document_node[0].ocr_result # Access OCR results
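
With OCR results cached on the node, you can walk the pages and dump the recognized text. A minimal sketch; sizing the node with len() and reading a .text field on each result are assumptions extrapolated from the access pattern above, not documented API:

for index in range(len(document_node)):   # assumed: the node is sized by page count
    ocr_result = document_node[index].ocr_result  # per-page OCR container, as accessed above
    print(f"--- page {index + 1} ---")
    print(ocr_result.text)                # assumed: plain-text field on the OCR result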

Document Search

When a large language model returns a result, we often want to highlight it for our users. However, language models return plain text, while highlighting a result on the original document requires a page number and a bounding box.

After extracting text from a PDF, we can support this pattern using DocumentProvenanceLocator, which lives on a DocumentNode.

from docprompt import load_document, DocumentNode
from docprompt.tasks.ocr.gcp import GoogleOcrProvider

provider = GoogleOcrProvider.from_service_account_file(
  project_id=my_project_id,
  processor_id=my_processor_id,
  service_account_file=path_to_service_file
)

document = load_document("path/to/my.pdf")

# A container holds derived data for a document, like OCR or classification results
document_node = DocumentNode.from_document(document)

provider.process_document_node(document_node) # Caches results on the document_node

# With OCR results available, we can now instantiate a locator and search through documents.

document_node.locator.search("John Doe") # This will return a list of all terms across the document that contain "John Doe"
document_node.locator.search("Jane Doe", page_number=4) # Just return results a list of matching results from page 4

This functionality uses a combination of rtree and the Rust library tantivy, allowing you to perform thousands of searches in seconds :fire: :rocket:
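
To turn a hit back into the page-plus-bounding-box highlight described above, you can rasterize the matching page and draw the box on it. A rough sketch with Pillow; the hit's page_number and normalized bounding_box fields are assumptions about the locator's return type, not documented API:

import io
from PIL import Image, ImageDraw

hit = document_node.locator.search("John Doe")[0]

# Rasterize the page containing the hit (page numbers assumed 1-based),
# assuming rasterize_page returns encoded image bytes as in the earlier example
page_bytes = document.rasterize_page(hit.page_number, dpi=120)
image = Image.open(io.BytesIO(page_bytes)).convert("RGB")

# Scale the (assumed) 0-1 normalized bounding box to pixel coordinates and draw it
box = hit.bounding_box
rect = (
    box.x0 * image.width,
    box.y0 * image.height,
    box.x1 * image.width,
    box.y1 * image.height,
)
ImageDraw.Draw(image).rectangle(rect, outline="red", width=3)
image.save("highlight.png")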

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

docprompt-0.8.6.tar.gz (19.7 MB)

Uploaded Source

Built Distribution

docprompt-0.8.6-py3-none-any.whl (13.6 MB)

Uploaded Python 3

File details

Details for the file docprompt-0.8.6.tar.gz.

File metadata

  • Download URL: docprompt-0.8.6.tar.gz
  • Upload date:
  • Size: 19.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: pdm/2.19.1 CPython/3.10.12 Linux/6.8.0-45-generic

File hashes

Hashes for docprompt-0.8.6.tar.gz

  • SHA256: 16a9763f199bf89357b96f05d4f26ed9bae75e1bc6b3b21822af5f71e4507c41
  • MD5: c9878d1aa7db480ce042ce1282654d52
  • BLAKE2b-256: 75ab27ab68d9227b5d93c7e8acc744c6d7bd568ab2a0a307528072264e64e450

See more details on using hashes here.

File details

Details for the file docprompt-0.8.6-py3-none-any.whl.

File metadata

  • Download URL: docprompt-0.8.6-py3-none-any.whl
  • Upload date:
  • Size: 13.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: pdm/2.19.1 CPython/3.10.12 Linux/6.8.0-45-generic

File hashes

Hashes for docprompt-0.8.6-py3-none-any.whl

  • SHA256: 172d6d7aaf187369e4ac01f3f75749911a08ec5cb2ee516223338b87aca68de3
  • MD5: 75d981b6f96f2584d7a285e080ccc3a1
  • BLAKE2b-256: 5d2ab2dc23f609370234752c96af07c262611bc4795b12d8f9a83e00c5c8f083

See more details on using hashes here.
